00:00:00.001 Started by upstream project "autotest-per-patch" build number 126204
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.044 The recommended git tool is: git
00:00:00.044 using credential 00000000-0000-0000-0000-000000000002
00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.062 Fetching changes from the remote Git repository
00:00:00.065 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.087 Using shallow fetch with depth 1
00:00:00.087 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.087 > git --version # timeout=10
00:00:00.126 > git --version # 'git version 2.39.2'
00:00:00.126 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.175 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.175 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.263 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.273 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.283 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:03.283 > git config core.sparsecheckout # timeout=10
00:00:03.292 > git read-tree -mu HEAD # timeout=10
00:00:03.308 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:03.326 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:03.326 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:03.454 [Pipeline] Start of Pipeline
00:00:03.468 [Pipeline] library
00:00:03.470 Loading library shm_lib@master
00:00:03.470 Library shm_lib@master is cached. Copying from home.
00:00:03.530 [Pipeline] node
00:00:03.535 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.537 [Pipeline] {
00:00:03.545 [Pipeline] catchError
00:00:03.547 [Pipeline] {
00:00:03.557 [Pipeline] wrap
00:00:03.564 [Pipeline] {
00:00:03.571 [Pipeline] stage
00:00:03.572 [Pipeline] { (Prologue)
00:00:03.744 [Pipeline] sh
00:00:04.022 + logger -p user.info -t JENKINS-CI
00:00:04.046 [Pipeline] echo
00:00:04.049 Node: GP11
00:00:04.059 [Pipeline] sh
00:00:04.359 [Pipeline] setCustomBuildProperty
00:00:04.370 [Pipeline] echo
00:00:04.371 Cleanup processes
00:00:04.375 [Pipeline] sh
00:00:04.655 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.655 1302802 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.669 [Pipeline] sh
00:00:04.951 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.951 ++ grep -v 'sudo pgrep'
00:00:04.951 ++ awk '{print $1}'
00:00:04.951 + sudo kill -9
00:00:04.951 + true
00:00:04.965 [Pipeline] cleanWs
00:00:04.974 [WS-CLEANUP] Deleting project workspace...
00:00:04.974 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.980 [WS-CLEANUP] done
00:00:04.984 [Pipeline] setCustomBuildProperty
00:00:04.996 [Pipeline] sh
00:00:05.274 + sudo git config --global --replace-all safe.directory '*'
00:00:05.358 [Pipeline] httpRequest
00:00:05.383 [Pipeline] echo
00:00:05.384 Sorcerer 10.211.164.101 is alive
00:00:05.392 [Pipeline] httpRequest
00:00:05.397 HttpMethod: GET
00:00:05.397 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.398 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.409 Response Code: HTTP/1.1 200 OK
00:00:05.410 Success: Status code 200 is in the accepted range: 200,404
00:00:05.410 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.685 [Pipeline] sh
00:00:06.965 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.982 [Pipeline] httpRequest
00:00:07.008 [Pipeline] echo
00:00:07.009 Sorcerer 10.211.164.101 is alive
00:00:07.014 [Pipeline] httpRequest
00:00:07.018 HttpMethod: GET
00:00:07.019 URL: http://10.211.164.101/packages/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:07.020 Sending request to url: http://10.211.164.101/packages/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:07.031 Response Code: HTTP/1.1 200 OK
00:00:07.032 Success: Status code 200 is in the accepted range: 200,404
00:00:07.033 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:40.826 [Pipeline] sh
00:00:41.115 + tar --no-same-owner -xf spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz
00:00:44.414 [Pipeline] sh
00:00:44.722 + git -C spdk log --oneline -n5
00:00:44.722 72fc6988f nvmf: add nvmf_update_mdns_prr
00:00:44.722 97f71d59d nvmf: consolidate listener addition in avahi_entry_group_add_listeners
00:00:44.722 719d03c6a sock/uring: only register net impl if supported
00:00:44.722 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:44.722 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:44.736 [Pipeline] }
00:00:44.758 [Pipeline] // stage
00:00:44.768 [Pipeline] stage
00:00:44.771 [Pipeline] { (Prepare)
00:00:44.793 [Pipeline] writeFile
00:00:44.811 [Pipeline] sh
00:00:45.105 + logger -p user.info -t JENKINS-CI
00:00:45.149 [Pipeline] sh
00:00:45.432 + logger -p user.info -t JENKINS-CI
00:00:45.445 [Pipeline] sh
00:00:45.731 + cat autorun-spdk.conf
00:00:45.731 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.731 SPDK_TEST_NVMF=1
00:00:45.731 SPDK_TEST_NVME_CLI=1
00:00:45.731 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:45.731 SPDK_TEST_NVMF_NICS=e810
00:00:45.731 SPDK_TEST_VFIOUSER=1
00:00:45.731 SPDK_RUN_UBSAN=1
00:00:45.731 NET_TYPE=phy
00:00:45.739 RUN_NIGHTLY=0
00:00:45.745 [Pipeline] readFile
00:00:45.774 [Pipeline] withEnv
00:00:45.776 [Pipeline] {
00:00:45.791 [Pipeline] sh
00:00:46.077 + set -ex
00:00:46.077 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:46.077 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:46.077 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.077 ++ SPDK_TEST_NVMF=1
00:00:46.077 ++ SPDK_TEST_NVME_CLI=1
00:00:46.077 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:46.077 ++ SPDK_TEST_NVMF_NICS=e810
00:00:46.077 ++ SPDK_TEST_VFIOUSER=1
00:00:46.077 ++ SPDK_RUN_UBSAN=1
00:00:46.077 ++ NET_TYPE=phy
00:00:46.077 ++ RUN_NIGHTLY=0
00:00:46.077 + case $SPDK_TEST_NVMF_NICS in
00:00:46.077 + DRIVERS=ice
00:00:46.077 + [[ tcp == \r\d\m\a ]]
00:00:46.077 + [[ -n ice ]]
00:00:46.077 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:46.077 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:46.077 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:46.077 rmmod: ERROR: Module irdma is not currently loaded
00:00:46.077 rmmod: ERROR: Module i40iw is not currently loaded
00:00:46.077 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:46.077 + true
00:00:46.077 + for D in $DRIVERS
00:00:46.077 + sudo modprobe ice
00:00:46.077 + exit 0
00:00:46.087 [Pipeline] }
00:00:46.107 [Pipeline] // withEnv
00:00:46.113 [Pipeline] }
00:00:46.126 [Pipeline] // stage
00:00:46.135 [Pipeline] catchError
00:00:46.137 [Pipeline] {
00:00:46.151 [Pipeline] timeout
00:00:46.152 Timeout set to expire in 50 min
00:00:46.153 [Pipeline] {
00:00:46.170 [Pipeline] stage
00:00:46.172 [Pipeline] { (Tests)
00:00:46.189 [Pipeline] sh
00:00:46.471 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.471 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.471 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.471 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:46.471 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.471 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:46.471 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:46.471 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:46.471 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:46.471 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:46.471 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:46.471 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.471 + source /etc/os-release
00:00:46.471 ++ NAME='Fedora Linux'
00:00:46.471 ++ VERSION='38 (Cloud Edition)'
00:00:46.471 ++ ID=fedora
00:00:46.471 ++ VERSION_ID=38
00:00:46.471 ++ VERSION_CODENAME=
00:00:46.471 ++ PLATFORM_ID=platform:f38
00:00:46.471 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:46.471 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:46.471 ++ LOGO=fedora-logo-icon
00:00:46.471 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:46.471 ++ HOME_URL=https://fedoraproject.org/
00:00:46.471 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:46.471 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:46.471 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:46.471 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:46.471 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:46.471 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:46.471 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:46.471 ++ SUPPORT_END=2024-05-14
00:00:46.471 ++ VARIANT='Cloud Edition'
00:00:46.471 ++ VARIANT_ID=cloud
00:00:46.471 + uname -a
00:00:46.471 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:46.471 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:47.408 Hugepages
00:00:47.408 node hugesize free / total
00:00:47.408 node0 1048576kB 0 / 0
00:00:47.408 node0 2048kB 0 / 0
00:00:47.408 node1 1048576kB 0 / 0
00:00:47.408 node1 2048kB 0 / 0
00:00:47.408
00:00:47.408 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:47.408 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:47.408 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:47.408 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:47.408 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:47.408 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:47.408 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:47.408 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:47.408 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:47.408 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:47.408 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:47.408 + rm -f /tmp/spdk-ld-path
00:00:47.408 + source autorun-spdk.conf
00:00:47.408 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.408 ++ SPDK_TEST_NVMF=1
00:00:47.408 ++ SPDK_TEST_NVME_CLI=1
00:00:47.408 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.408 ++ SPDK_TEST_NVMF_NICS=e810
00:00:47.408 ++ SPDK_TEST_VFIOUSER=1
00:00:47.408 ++ SPDK_RUN_UBSAN=1
00:00:47.408 ++ NET_TYPE=phy
00:00:47.408 ++ RUN_NIGHTLY=0
00:00:47.408 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:47.408 + [[ -n '' ]]
00:00:47.408 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:47.408 + for M in /var/spdk/build-*-manifest.txt
00:00:47.408 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:47.408 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:47.408 + for M in /var/spdk/build-*-manifest.txt
00:00:47.408 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:47.408 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:47.408 ++ uname
00:00:47.408 + [[ Linux == \L\i\n\u\x ]]
00:00:47.408 + sudo dmesg -T
00:00:47.667 + sudo dmesg --clear
00:00:47.667 + dmesg_pid=1303479
00:00:47.667 + [[ Fedora Linux == FreeBSD ]]
00:00:47.667 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.667 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.667 + sudo dmesg -Tw
00:00:47.667 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:47.667 + [[ -x /usr/src/fio-static/fio ]]
00:00:47.667 + export FIO_BIN=/usr/src/fio-static/fio
00:00:47.667 + FIO_BIN=/usr/src/fio-static/fio
00:00:47.667 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:47.667 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:47.667 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:47.667 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.667 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.667 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:47.667 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.667 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.667 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:47.667 Test configuration:
00:00:47.667 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.667 SPDK_TEST_NVMF=1
00:00:47.667 SPDK_TEST_NVME_CLI=1
00:00:47.667 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.667 SPDK_TEST_NVMF_NICS=e810
00:00:47.667 SPDK_TEST_VFIOUSER=1
00:00:47.667 SPDK_RUN_UBSAN=1
00:00:47.667 NET_TYPE=phy
00:00:47.668 RUN_NIGHTLY=0
16:17:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
16:17:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
16:17:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:17:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:17:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:17:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:17:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:17:27 -- paths/export.sh@5 -- $ export PATH
16:17:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:17:27 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
16:17:27 -- common/autobuild_common.sh@444 -- $ date +%s
16:17:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721053047.XXXXXX
00:00:47.668 16:17:27 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721053047.QmZuyo
00:00:47.668 16:17:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:00:47.668 16:17:27 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:00:47.668 16:17:27 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:47.668 16:17:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:47.668 16:17:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:47.668 16:17:27 -- common/autobuild_common.sh@460 -- $ get_config_params
00:00:47.668 16:17:27 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:47.668 16:17:27 -- common/autotest_common.sh@10 -- $ set +x
00:00:47.668 16:17:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:47.668 16:17:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:00:47.668 16:17:27 -- pm/common@17 -- $ local monitor
00:00:47.668 16:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.668 16:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.668 16:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.668 16:17:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.668 16:17:27 -- pm/common@21 -- $ date +%s
00:00:47.668 16:17:27 -- pm/common@21 -- $ date +%s
00:00:47.668 16:17:27 -- pm/common@25 -- $ sleep 1
00:00:47.668 16:17:27 -- pm/common@21 -- $ date +%s
00:00:47.668 16:17:27 -- pm/common@21 -- $ date +%s
00:00:47.668 16:17:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053047
00:00:47.668 16:17:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053047
00:00:47.668 16:17:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053047
00:00:47.668 16:17:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721053047
00:00:47.668 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053047_collect-vmstat.pm.log
00:00:47.668 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053047_collect-cpu-load.pm.log
00:00:47.668 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053047_collect-cpu-temp.pm.log
00:00:47.668 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721053047_collect-bmc-pm.bmc.pm.log
00:00:48.607 16:17:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:48.607 16:17:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:48.607 16:17:28 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:48.607 16:17:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:48.607 16:17:28 -- spdk/autobuild.sh@16 -- $ date -u
00:00:48.607 Mon Jul 15 02:17:28 PM UTC 2024
00:00:48.607 16:17:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:48.607 v24.09-pre-204-g72fc6988f
00:00:48.607 16:17:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:48.607 16:17:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:48.607 16:17:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:48.607 16:17:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:48.607 16:17:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:48.607 16:17:28 -- common/autotest_common.sh@10 -- $ set +x
00:00:48.607 ************************************
00:00:48.607 START TEST ubsan
00:00:48.607 ************************************
00:00:48.607 16:17:28 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:48.607 using ubsan
00:00:48.607
00:00:48.607 real 0m0.000s
00:00:48.607 user 0m0.000s
00:00:48.607 sys 0m0.000s
00:00:48.607 16:17:28 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:48.607 16:17:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:48.607 ************************************
00:00:48.607 END TEST ubsan
00:00:48.607 ************************************
00:00:48.607 16:17:28 -- common/autotest_common.sh@1142 -- $ return 0
00:00:48.607 16:17:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:48.607 16:17:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:48.607 16:17:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:48.607 16:17:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:48.607 16:17:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:48.607 16:17:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:48.607 16:17:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:48.607 16:17:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:48.607 16:17:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:48.865 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:48.865 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:49.124 Using 'verbs' RDMA provider
00:00:59.679 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:09.663 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:09.663 Creating mk/config.mk...done.
00:01:09.663 Creating mk/cc.flags.mk...done.
00:01:09.663 Type 'make' to build.
00:01:09.663 16:17:48 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:09.663 16:17:48 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:09.663 16:17:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:09.663 16:17:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.663 ************************************
00:01:09.663 START TEST make
00:01:09.663 ************************************
00:01:09.663 16:17:48 make -- common/autotest_common.sh@1123 -- $ make -j48
00:01:09.663 make[1]: Nothing to be done for 'all'.
00:01:11.054 The Meson build system
00:01:11.054 Version: 1.3.1
00:01:11.054 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:11.054 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:11.054 Build type: native build
00:01:11.054 Project name: libvfio-user
00:01:11.054 Project version: 0.0.1
00:01:11.054 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:11.054 C linker for the host machine: cc ld.bfd 2.39-16
00:01:11.054 Host machine cpu family: x86_64
00:01:11.054 Host machine cpu: x86_64
00:01:11.054 Run-time dependency threads found: YES
00:01:11.054 Library dl found: YES
00:01:11.054 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:11.054 Run-time dependency json-c found: YES 0.17
00:01:11.054 Run-time dependency cmocka found: YES 1.1.7
00:01:11.054 Program pytest-3 found: NO
00:01:11.054 Program flake8 found: NO
00:01:11.054 Program misspell-fixer found: NO
00:01:11.054 Program restructuredtext-lint found: NO
00:01:11.054 Program valgrind found: YES (/usr/bin/valgrind)
00:01:11.054 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:11.054 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:11.054 Compiler for C supports arguments -Wwrite-strings: YES
00:01:11.054 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:11.054 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:11.054 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:11.054 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:11.054 Build targets in project: 8
00:01:11.054 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:11.054 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:11.054
00:01:11.054 libvfio-user 0.0.1
00:01:11.054
00:01:11.054 User defined options
00:01:11.054 buildtype : debug
00:01:11.054 default_library: shared
00:01:11.054 libdir : /usr/local/lib
00:01:11.054
00:01:11.054 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:11.624 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:11.885 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:11.885 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:11.885 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:11.885 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:11.885 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:11.885 [6/37] Compiling C object samples/null.p/null.c.o
00:01:11.885 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:12.145 [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:12.145 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:12.145 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:12.145 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:12.145 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:12.145 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:12.145 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:12.145 [15/37] Compiling C object samples/server.p/server.c.o
00:01:12.145 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:12.145 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:12.145 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:12.145 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:12.145 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:12.145 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:12.145 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:12.145 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:12.145 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:12.145 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:12.145 [26/37] Compiling C object samples/client.p/client.c.o
00:01:12.145 [27/37] Linking target samples/client
00:01:12.407 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:12.407 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:12.407 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:12.407 [31/37] Linking target test/unit_tests
00:01:12.671 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:12.671 [33/37] Linking target samples/server
00:01:12.671 [34/37] Linking target samples/lspci
00:01:12.671 [35/37] Linking target samples/null
00:01:12.671 [36/37] Linking target samples/gpio-pci-idio-16
00:01:12.671 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:12.671 INFO: autodetecting backend as ninja
00:01:12.671 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:12.671 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:13.247 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:13.247 ninja: no work to do.
00:01:18.541 The Meson build system
00:01:18.541 Version: 1.3.1
00:01:18.541 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:18.541 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:18.541 Build type: native build
00:01:18.541 Program cat found: YES (/usr/bin/cat)
00:01:18.541 Project name: DPDK
00:01:18.541 Project version: 24.03.0
00:01:18.541 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:18.541 C linker for the host machine: cc ld.bfd 2.39-16
00:01:18.541 Host machine cpu family: x86_64
00:01:18.541 Host machine cpu: x86_64
00:01:18.541 Message: ## Building in Developer Mode ##
00:01:18.541 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:18.541 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:18.541 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:18.541 Program python3 found: YES (/usr/bin/python3)
00:01:18.541 Program cat found: YES (/usr/bin/cat)
00:01:18.541 Compiler for C supports arguments -march=native: YES
00:01:18.541 Checking for size of "void *" : 8
00:01:18.541 Checking for size of "void *" : 8 (cached)
00:01:18.541 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:18.541 Library m found: YES
00:01:18.541 Library numa found: YES
00:01:18.541 Has header "numaif.h" : YES
00:01:18.541 Library fdt found: NO
00:01:18.541 Library execinfo found: NO
00:01:18.541 Has header "execinfo.h" : YES
00:01:18.541 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:18.541 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:18.541 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:18.541 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:18.541 Run-time dependency openssl found: YES 3.0.9
00:01:18.541 Run-time dependency libpcap found: YES 1.10.4
00:01:18.541 Has header "pcap.h" with dependency libpcap: YES
00:01:18.541 Compiler for C supports arguments -Wcast-qual: YES
00:01:18.541 Compiler for C supports arguments -Wdeprecated: YES
00:01:18.541 Compiler for C supports arguments -Wformat: YES
00:01:18.541 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:18.541 Compiler for C supports arguments -Wformat-security: NO
00:01:18.541 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:18.541 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:18.541 Compiler for C supports arguments -Wnested-externs: YES
00:01:18.541 Compiler for C supports arguments -Wold-style-definition: YES
00:01:18.541 Compiler for C supports arguments -Wpointer-arith: YES
00:01:18.541 Compiler for C supports arguments -Wsign-compare: YES
00:01:18.541 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:18.541 Compiler for C supports arguments -Wundef: YES
00:01:18.541 Compiler for C supports arguments -Wwrite-strings: YES
00:01:18.541 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:18.541 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:18.541 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:18.541 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:18.541 Program objdump found: YES (/usr/bin/objdump)
00:01:18.541 Compiler for C supports arguments -mavx512f: YES
00:01:18.541 Checking if "AVX512 checking" compiles: YES
00:01:18.541 Fetching value of define "__SSE4_2__" : 1
00:01:18.541 Fetching value of define "__AES__" : 1
00:01:18.541 Fetching value of define "__AVX__" : 1
00:01:18.541 Fetching value of define "__AVX2__" : (undefined)
00:01:18.541 Fetching value of define "__AVX512BW__" : (undefined)
00:01:18.541 Fetching value of define "__AVX512CD__" : (undefined)
00:01:18.541 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:18.541 Fetching value of define "__AVX512F__" : (undefined)
00:01:18.541 Fetching value of define "__AVX512VL__" : (undefined)
00:01:18.541 Fetching value of define "__PCLMUL__" : 1
00:01:18.541 Fetching value of define "__RDRND__" : 1
00:01:18.541 Fetching value of define "__RDSEED__" : (undefined)
00:01:18.541 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:18.541 Fetching value of define "__znver1__" : (undefined)
00:01:18.541 Fetching value of define "__znver2__" : (undefined)
00:01:18.541 Fetching value of define "__znver3__" : (undefined)
00:01:18.541 Fetching value of define "__znver4__" : (undefined)
00:01:18.541 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:18.541 Message: lib/log: Defining dependency "log"
00:01:18.541 Message: lib/kvargs: Defining dependency "kvargs"
00:01:18.541 Message: lib/telemetry: Defining dependency "telemetry"
00:01:18.541 Checking for function "getentropy" : NO
00:01:18.541 Message: lib/eal: Defining dependency "eal"
00:01:18.541 Message: lib/ring: Defining dependency "ring"
00:01:18.541 Message: lib/rcu: Defining dependency "rcu"
00:01:18.541 Message: lib/mempool: Defining dependency "mempool"
00:01:18.541 Message: lib/mbuf: Defining dependency "mbuf"
00:01:18.541 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:18.541 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:18.541 Compiler for C supports arguments -mpclmul: YES
00:01:18.541 Compiler for C supports arguments -maes: YES
00:01:18.541 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:18.541 Compiler for C supports arguments -mavx512bw: YES
00:01:18.541 Compiler for C supports arguments -mavx512dq: YES
00:01:18.541 Compiler for C supports arguments -mavx512vl: YES
00:01:18.541 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:18.541 Compiler for C supports arguments -mavx2: YES
00:01:18.541 Compiler for C supports arguments -mavx: YES
00:01:18.541 Message: lib/net: Defining dependency "net"
00:01:18.541 Message: lib/meter: Defining dependency "meter"
00:01:18.541 Message: lib/ethdev: Defining dependency "ethdev"
00:01:18.541 Message: lib/pci: Defining dependency "pci"
00:01:18.541 Message: lib/cmdline: Defining dependency "cmdline"
00:01:18.541 Message: lib/hash: Defining dependency "hash"
00:01:18.541 Message: lib/timer: Defining dependency "timer"
00:01:18.541 Message: lib/compressdev: Defining dependency "compressdev"
00:01:18.541 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:18.541 Message: lib/dmadev: Defining dependency "dmadev"
00:01:18.541 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:18.541 Message: lib/power: Defining dependency "power"
00:01:18.541 Message: lib/reorder: Defining dependency "reorder"
00:01:18.541 Message: lib/security: Defining dependency "security"
00:01:18.541 Has header "linux/userfaultfd.h" : YES
00:01:18.541 Has header "linux/vduse.h" : YES
00:01:18.541 Message: lib/vhost: Defining dependency "vhost"
00:01:18.541 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:18.541 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:18.541 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:18.541 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:18.541 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:18.541 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:18.541 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:18.541 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:18.541 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:18.541 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:18.541 Program doxygen found: YES (/usr/bin/doxygen)
00:01:18.541 Configuring doxy-api-html.conf using configuration
00:01:18.541 Configuring doxy-api-man.conf using configuration
00:01:18.541
Program mandb found: YES (/usr/bin/mandb)
00:01:18.541 Program sphinx-build found: NO
00:01:18.541 Configuring rte_build_config.h using configuration
00:01:18.541 Message:
00:01:18.541 =================
00:01:18.541 Applications Enabled
00:01:18.541 =================
00:01:18.541
00:01:18.541 apps:
00:01:18.541
00:01:18.541
00:01:18.541 Message:
00:01:18.541 =================
00:01:18.541 Libraries Enabled
00:01:18.541 =================
00:01:18.541
00:01:18.541 libs:
00:01:18.542 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:18.542 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:18.542 cryptodev, dmadev, power, reorder, security, vhost,
00:01:18.542
00:01:18.542 Message:
00:01:18.542 ===============
00:01:18.542 Drivers Enabled
00:01:18.542 ===============
00:01:18.542
00:01:18.542 common:
00:01:18.542
00:01:18.542 bus:
00:01:18.542 pci, vdev,
00:01:18.542 mempool:
00:01:18.542 ring,
00:01:18.542 dma:
00:01:18.542
00:01:18.542 net:
00:01:18.542
00:01:18.542 crypto:
00:01:18.542
00:01:18.542 compress:
00:01:18.542
00:01:18.542 vdpa:
00:01:18.542
00:01:18.542
00:01:18.542 Message:
00:01:18.542 =================
00:01:18.542 Content Skipped
00:01:18.542 =================
00:01:18.542
00:01:18.542 apps:
00:01:18.542 dumpcap: explicitly disabled via build config
00:01:18.542 graph: explicitly disabled via build config
00:01:18.542 pdump: explicitly disabled via build config
00:01:18.542 proc-info: explicitly disabled via build config
00:01:18.542 test-acl: explicitly disabled via build config
00:01:18.542 test-bbdev: explicitly disabled via build config
00:01:18.542 test-cmdline: explicitly disabled via build config
00:01:18.542 test-compress-perf: explicitly disabled via build config
00:01:18.542 test-crypto-perf: explicitly disabled via build config
00:01:18.542 test-dma-perf: explicitly disabled via build config
00:01:18.542 test-eventdev: explicitly disabled via build config
00:01:18.542 test-fib: explicitly disabled via build config
00:01:18.542 test-flow-perf: explicitly disabled via build config
00:01:18.542 test-gpudev: explicitly disabled via build config
00:01:18.542 test-mldev: explicitly disabled via build config
00:01:18.542 test-pipeline: explicitly disabled via build config
00:01:18.542 test-pmd: explicitly disabled via build config
00:01:18.542 test-regex: explicitly disabled via build config
00:01:18.542 test-sad: explicitly disabled via build config
00:01:18.542 test-security-perf: explicitly disabled via build config
00:01:18.542
00:01:18.542 libs:
00:01:18.542 argparse: explicitly disabled via build config
00:01:18.542 metrics: explicitly disabled via build config
00:01:18.542 acl: explicitly disabled via build config
00:01:18.542 bbdev: explicitly disabled via build config
00:01:18.542 bitratestats: explicitly disabled via build config
00:01:18.542 bpf: explicitly disabled via build config
00:01:18.542 cfgfile: explicitly disabled via build config
00:01:18.542 distributor: explicitly disabled via build config
00:01:18.542 efd: explicitly disabled via build config
00:01:18.542 eventdev: explicitly disabled via build config
00:01:18.542 dispatcher: explicitly disabled via build config
00:01:18.542 gpudev: explicitly disabled via build config
00:01:18.542 gro: explicitly disabled via build config
00:01:18.542 gso: explicitly disabled via build config
00:01:18.542 ip_frag: explicitly disabled via build config
00:01:18.542 jobstats: explicitly disabled via build config
00:01:18.542 latencystats: explicitly disabled via build config
00:01:18.542 lpm: explicitly disabled via build config
00:01:18.542 member: explicitly disabled via build config
00:01:18.542 pcapng: explicitly disabled via build config
00:01:18.542 rawdev: explicitly disabled via build config
00:01:18.542 regexdev: explicitly disabled via build config
00:01:18.542 mldev: explicitly disabled via build config
00:01:18.542 rib: explicitly disabled via build config
00:01:18.542 sched: explicitly disabled via build config
00:01:18.542 stack: explicitly disabled via build config
00:01:18.542 ipsec: explicitly disabled via build config
00:01:18.542 pdcp: explicitly disabled via build config
00:01:18.542 fib: explicitly disabled via build config
00:01:18.542 port: explicitly disabled via build config
00:01:18.542 pdump: explicitly disabled via build config
00:01:18.542 table: explicitly disabled via build config
00:01:18.542 pipeline: explicitly disabled via build config
00:01:18.542 graph: explicitly disabled via build config
00:01:18.542 node: explicitly disabled via build config
00:01:18.542
00:01:18.542 drivers:
00:01:18.542 common/cpt: not in enabled drivers build config
00:01:18.542 common/dpaax: not in enabled drivers build config
00:01:18.542 common/iavf: not in enabled drivers build config
00:01:18.542 common/idpf: not in enabled drivers build config
00:01:18.542 common/ionic: not in enabled drivers build config
00:01:18.542 common/mvep: not in enabled drivers build config
00:01:18.542 common/octeontx: not in enabled drivers build config
00:01:18.542 bus/auxiliary: not in enabled drivers build config
00:01:18.542 bus/cdx: not in enabled drivers build config
00:01:18.542 bus/dpaa: not in enabled drivers build config
00:01:18.542 bus/fslmc: not in enabled drivers build config
00:01:18.542 bus/ifpga: not in enabled drivers build config
00:01:18.542 bus/platform: not in enabled drivers build config
00:01:18.542 bus/uacce: not in enabled drivers build config
00:01:18.542 bus/vmbus: not in enabled drivers build config
00:01:18.542 common/cnxk: not in enabled drivers build config
00:01:18.542 common/mlx5: not in enabled drivers build config
00:01:18.542 common/nfp: not in enabled drivers build config
00:01:18.542 common/nitrox: not in enabled drivers build config
00:01:18.542 common/qat: not in enabled drivers build config
00:01:18.542 common/sfc_efx: not in enabled drivers build config
00:01:18.542 mempool/bucket: not in enabled drivers build config
00:01:18.542 mempool/cnxk: not in enabled drivers build config
00:01:18.542 mempool/dpaa: not in enabled drivers build config
00:01:18.542 mempool/dpaa2: not in enabled drivers build config
00:01:18.542 mempool/octeontx: not in enabled drivers build config
00:01:18.542 mempool/stack: not in enabled drivers build config
00:01:18.542 dma/cnxk: not in enabled drivers build config
00:01:18.542 dma/dpaa: not in enabled drivers build config
00:01:18.542 dma/dpaa2: not in enabled drivers build config
00:01:18.542 dma/hisilicon: not in enabled drivers build config
00:01:18.542 dma/idxd: not in enabled drivers build config
00:01:18.542 dma/ioat: not in enabled drivers build config
00:01:18.542 dma/skeleton: not in enabled drivers build config
00:01:18.542 net/af_packet: not in enabled drivers build config
00:01:18.542 net/af_xdp: not in enabled drivers build config
00:01:18.542 net/ark: not in enabled drivers build config
00:01:18.542 net/atlantic: not in enabled drivers build config
00:01:18.542 net/avp: not in enabled drivers build config
00:01:18.542 net/axgbe: not in enabled drivers build config
00:01:18.542 net/bnx2x: not in enabled drivers build config
00:01:18.542 net/bnxt: not in enabled drivers build config
00:01:18.542 net/bonding: not in enabled drivers build config
00:01:18.542 net/cnxk: not in enabled drivers build config
00:01:18.542 net/cpfl: not in enabled drivers build config
00:01:18.542 net/cxgbe: not in enabled drivers build config
00:01:18.542 net/dpaa: not in enabled drivers build config
00:01:18.542 net/dpaa2: not in enabled drivers build config
00:01:18.542 net/e1000: not in enabled drivers build config
00:01:18.542 net/ena: not in enabled drivers build config
00:01:18.542 net/enetc: not in enabled drivers build config
00:01:18.542 net/enetfec: not in enabled drivers build config
00:01:18.542 net/enic: not in enabled drivers build config
00:01:18.542 net/failsafe: not in enabled drivers build config
00:01:18.542 net/fm10k: not in enabled drivers build config
00:01:18.542 net/gve: not in enabled drivers build config
00:01:18.542 net/hinic: not in enabled drivers build config
00:01:18.542 net/hns3: not in enabled drivers build config
00:01:18.542 net/i40e: not in enabled drivers build config
00:01:18.542 net/iavf: not in enabled drivers build config
00:01:18.542 net/ice: not in enabled drivers build config
00:01:18.542 net/idpf: not in enabled drivers build config
00:01:18.542 net/igc: not in enabled drivers build config
00:01:18.542 net/ionic: not in enabled drivers build config
00:01:18.542 net/ipn3ke: not in enabled drivers build config
00:01:18.542 net/ixgbe: not in enabled drivers build config
00:01:18.542 net/mana: not in enabled drivers build config
00:01:18.542 net/memif: not in enabled drivers build config
00:01:18.542 net/mlx4: not in enabled drivers build config
00:01:18.542 net/mlx5: not in enabled drivers build config
00:01:18.542 net/mvneta: not in enabled drivers build config
00:01:18.542 net/mvpp2: not in enabled drivers build config
00:01:18.542 net/netvsc: not in enabled drivers build config
00:01:18.542 net/nfb: not in enabled drivers build config
00:01:18.542 net/nfp: not in enabled drivers build config
00:01:18.542 net/ngbe: not in enabled drivers build config
00:01:18.542 net/null: not in enabled drivers build config
00:01:18.542 net/octeontx: not in enabled drivers build config
00:01:18.542 net/octeon_ep: not in enabled drivers build config
00:01:18.542 net/pcap: not in enabled drivers build config
00:01:18.542 net/pfe: not in enabled drivers build config
00:01:18.542 net/qede: not in enabled drivers build config
00:01:18.542 net/ring: not in enabled drivers build config
00:01:18.542 net/sfc: not in enabled drivers build config
00:01:18.542 net/softnic: not in enabled drivers build config
00:01:18.542 net/tap: not in enabled drivers build config
00:01:18.542 net/thunderx: not in enabled drivers build config
00:01:18.542 net/txgbe: not in enabled drivers build config
00:01:18.542 net/vdev_netvsc: not in enabled drivers build config
00:01:18.542 net/vhost: not in enabled drivers build config
00:01:18.542 net/virtio: not in enabled drivers build config
00:01:18.542 net/vmxnet3: not in enabled drivers build config
00:01:18.542 raw/*: missing internal dependency, "rawdev"
00:01:18.542 crypto/armv8: not in enabled drivers build config
00:01:18.542 crypto/bcmfs: not in enabled drivers build config
00:01:18.542 crypto/caam_jr: not in enabled drivers build config
00:01:18.542 crypto/ccp: not in enabled drivers build config
00:01:18.542 crypto/cnxk: not in enabled drivers build config
00:01:18.542 crypto/dpaa_sec: not in enabled drivers build config
00:01:18.542 crypto/dpaa2_sec: not in enabled drivers build config
00:01:18.542 crypto/ipsec_mb: not in enabled drivers build config
00:01:18.542 crypto/mlx5: not in enabled drivers build config
00:01:18.542 crypto/mvsam: not in enabled drivers build config
00:01:18.542 crypto/nitrox: not in enabled drivers build config
00:01:18.542 crypto/null: not in enabled drivers build config
00:01:18.542 crypto/octeontx: not in enabled drivers build config
00:01:18.542 crypto/openssl: not in enabled drivers build config
00:01:18.542 crypto/scheduler: not in enabled drivers build config
00:01:18.542 crypto/uadk: not in enabled drivers build config
00:01:18.542 crypto/virtio: not in enabled drivers build config
00:01:18.542 compress/isal: not in enabled drivers build config
00:01:18.542 compress/mlx5: not in enabled drivers build config
00:01:18.542 compress/nitrox: not in enabled drivers build config
00:01:18.542 compress/octeontx: not in enabled drivers build config
00:01:18.542 compress/zlib: not in enabled drivers build config
00:01:18.542 regex/*: missing internal dependency, "regexdev"
00:01:18.542 ml/*: missing internal dependency, "mldev"
00:01:18.542 vdpa/ifc: not in enabled drivers build config
00:01:18.543 vdpa/mlx5: not in enabled drivers build config
00:01:18.543 vdpa/nfp: not in enabled drivers build config
00:01:18.543 vdpa/sfc: not in enabled drivers build config
00:01:18.543 event/*: missing internal dependency, "eventdev"
00:01:18.543 baseband/*: missing internal dependency, "bbdev"
00:01:18.543 gpu/*: missing internal dependency, "gpudev"
00:01:18.543
00:01:18.543
00:01:18.543 Build targets in project: 85
00:01:18.543
00:01:18.543 DPDK 24.03.0
00:01:18.543
00:01:18.543 User defined options
00:01:18.543 buildtype : debug
00:01:18.543 default_library : shared
00:01:18.543 libdir : lib
00:01:18.543 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:18.543 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:18.543 c_link_args :
00:01:18.543 cpu_instruction_set: native
00:01:18.543 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:18.543 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:18.543 enable_docs : false
00:01:18.543 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:18.543 enable_kmods : false
00:01:18.543 max_lcores : 128
00:01:18.543 tests : false
00:01:18.543
00:01:18.543 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:18.543 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:18.802 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:18.802 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:18.802 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:18.802 [4/268] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:18.802 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:18.802 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:18.802 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:18.802 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:18.802 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:18.802 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:18.802 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:18.802 [12/268] Linking static target lib/librte_kvargs.a 00:01:18.802 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:18.802 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:18.802 [15/268] Linking static target lib/librte_log.a 00:01:18.802 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:19.376 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.639 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:19.639 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:19.639 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:19.639 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:19.639 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:19.639 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:19.639 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:19.639 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:19.639 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:19.639 
[27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:19.639 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:19.639 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:19.639 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:19.639 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:19.639 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:19.639 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.639 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:19.639 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:19.639 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:19.639 [37/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:19.639 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:19.639 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:19.639 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:19.639 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:19.639 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:19.639 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:19.639 [44/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:19.639 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:19.639 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:19.639 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:19.639 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:19.639 [49/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:19.639 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:19.639 [51/268] Linking static target lib/librte_telemetry.a 00:01:19.639 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:19.639 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:19.639 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:19.639 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:19.904 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:19.904 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:19.904 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.904 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:19.904 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.904 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:19.904 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:19.904 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:19.904 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.904 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:19.904 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:19.904 [67/268] Linking target lib/librte_log.so.24.1 00:01:20.166 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:20.166 [69/268] Linking static target lib/librte_pci.a 00:01:20.166 [70/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:20.166 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:20.426 [72/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:20.426 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:20.426 [74/268] Linking target lib/librte_kvargs.so.24.1 00:01:20.426 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:20.426 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:20.426 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:20.426 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:20.426 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:20.426 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:20.688 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:20.688 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:20.688 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:20.688 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:20.688 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:20.688 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:20.688 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:20.688 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:20.688 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:20.688 [90/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.688 [91/268] Linking static target lib/librte_ring.a 00:01:20.688 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:20.688 [93/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:20.688 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:20.688 [95/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:20.688 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:20.688 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:20.688 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:20.688 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:20.688 [100/268] Linking static target lib/librte_meter.a 00:01:20.688 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:20.688 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:20.688 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:20.688 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.688 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:20.688 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:20.688 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:20.688 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:20.688 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:20.950 [110/268] Linking target lib/librte_telemetry.so.24.1 00:01:20.950 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:20.950 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:20.950 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:20.950 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:20.950 [115/268] Linking static target lib/librte_mempool.a 00:01:20.950 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:20.950 [117/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:20.950 [118/268] 
Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:20.950 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:20.950 [120/268] Linking static target lib/librte_rcu.a 00:01:20.950 [121/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:20.950 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:20.950 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:20.950 [124/268] Linking static target lib/librte_eal.a 00:01:20.950 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:20.950 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:20.950 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:20.950 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:20.950 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:21.210 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:21.210 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:21.210 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:21.210 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:21.210 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.210 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:21.210 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:21.476 [137/268] Linking static target lib/librte_net.a 00:01:21.476 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:21.476 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.476 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:21.476 [141/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:21.476 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:21.476 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:21.476 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:21.476 [145/268] Linking static target lib/librte_cmdline.a 00:01:21.476 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:21.476 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.736 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:21.736 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:21.736 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:21.736 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:21.736 [152/268] Linking static target lib/librte_timer.a 00:01:21.736 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:21.736 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:21.736 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:21.736 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:21.736 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:21.736 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.736 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:21.995 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:21.995 [161/268] Linking static target lib/librte_dmadev.a 00:01:21.995 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:21.995 [163/268] Compiling 
C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:21.995 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:21.995 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.995 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:21.995 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:21.995 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:21.996 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:21.996 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:21.996 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:21.996 [172/268] Linking static target lib/librte_power.a 00:01:22.254 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.255 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:22.255 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:22.255 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:22.255 [177/268] Linking static target lib/librte_compressdev.a 00:01:22.255 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:22.255 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:22.255 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:22.255 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:22.255 [182/268] Linking static target lib/librte_hash.a 00:01:22.255 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:22.255 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:22.255 [185/268] Linking static target lib/librte_reorder.a 00:01:22.255 
[186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:22.255 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:22.255 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:22.513 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:22.513 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:22.513 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:22.513 [192/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.513 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.513 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:22.513 [195/268] Linking static target lib/librte_security.a 00:01:22.513 [196/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:22.513 [197/268] Linking static target lib/librte_mbuf.a 00:01:22.513 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:22.513 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:22.513 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.513 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.513 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:22.513 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:22.513 [204/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.513 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.513 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:22.513 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:22.772 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.772 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.772 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:22.772 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:22.772 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:22.772 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.772 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:22.772 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.772 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.029 [217/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:23.029 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.029 [219/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:23.029 [220/268] Linking static target lib/librte_cryptodev.a 00:01:23.029 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:23.029 [222/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:23.029 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:23.029 [224/268] Linking static target lib/librte_ethdev.a 00:01:23.029 [225/268] Linking static target drivers/librte_mempool_ring.a 00:01:23.029 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.962 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.337 [228/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:27.278 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.278 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.278 [231/268] Linking target lib/librte_eal.so.24.1 00:01:27.537 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:27.537 [233/268] Linking target lib/librte_ring.so.24.1 00:01:27.537 [234/268] Linking target lib/librte_meter.so.24.1 00:01:27.537 [235/268] Linking target lib/librte_pci.so.24.1 00:01:27.537 [236/268] Linking target lib/librte_timer.so.24.1 00:01:27.537 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:27.537 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:27.537 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:27.537 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:27.537 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:27.537 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:27.537 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:27.537 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:27.537 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:27.537 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:27.795 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:27.795 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:27.795 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:27.795 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:27.795 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:28.054 [252/268] Linking 
target lib/librte_reorder.so.24.1 00:01:28.054 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:28.054 [254/268] Linking target lib/librte_net.so.24.1 00:01:28.054 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:28.054 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:28.054 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:28.054 [258/268] Linking target lib/librte_security.so.24.1 00:01:28.054 [259/268] Linking target lib/librte_hash.so.24.1 00:01:28.054 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:28.054 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:28.313 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:28.313 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:28.313 [264/268] Linking target lib/librte_power.so.24.1 00:01:30.844 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:30.844 [266/268] Linking static target lib/librte_vhost.a 00:01:31.779 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.779 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:31.779 INFO: autodetecting backend as ninja 00:01:31.779 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:32.713 CC lib/ut_mock/mock.o 00:01:32.713 CC lib/log/log.o 00:01:32.713 CC lib/ut/ut.o 00:01:32.713 CC lib/log/log_flags.o 00:01:32.713 CC lib/log/log_deprecated.o 00:01:32.971 LIB libspdk_log.a 00:01:32.971 LIB libspdk_ut.a 00:01:32.971 LIB libspdk_ut_mock.a 00:01:32.971 SO libspdk_ut_mock.so.6.0 00:01:32.971 SO libspdk_ut.so.2.0 00:01:32.971 SO libspdk_log.so.7.0 00:01:32.971 SYMLINK libspdk_ut_mock.so 00:01:32.971 SYMLINK libspdk_ut.so 00:01:32.971 SYMLINK libspdk_log.so 00:01:33.230 CC lib/dma/dma.o 
00:01:33.230 CC lib/ioat/ioat.o 00:01:33.230 CXX lib/trace_parser/trace.o 00:01:33.230 CC lib/util/base64.o 00:01:33.230 CC lib/util/bit_array.o 00:01:33.230 CC lib/util/cpuset.o 00:01:33.230 CC lib/util/crc16.o 00:01:33.230 CC lib/util/crc32.o 00:01:33.230 CC lib/util/crc32c.o 00:01:33.230 CC lib/util/crc32_ieee.o 00:01:33.230 CC lib/util/crc64.o 00:01:33.230 CC lib/util/dif.o 00:01:33.230 CC lib/util/fd.o 00:01:33.230 CC lib/util/file.o 00:01:33.230 CC lib/util/hexlify.o 00:01:33.230 CC lib/util/iov.o 00:01:33.230 CC lib/util/math.o 00:01:33.230 CC lib/util/pipe.o 00:01:33.230 CC lib/util/strerror_tls.o 00:01:33.230 CC lib/util/string.o 00:01:33.230 CC lib/util/uuid.o 00:01:33.230 CC lib/util/fd_group.o 00:01:33.230 CC lib/util/xor.o 00:01:33.230 CC lib/util/zipf.o 00:01:33.230 CC lib/vfio_user/host/vfio_user_pci.o 00:01:33.230 CC lib/vfio_user/host/vfio_user.o 00:01:33.489 LIB libspdk_dma.a 00:01:33.489 SO libspdk_dma.so.4.0 00:01:33.489 LIB libspdk_ioat.a 00:01:33.489 SO libspdk_ioat.so.7.0 00:01:33.489 SYMLINK libspdk_dma.so 00:01:33.489 LIB libspdk_vfio_user.a 00:01:33.489 SYMLINK libspdk_ioat.so 00:01:33.489 SO libspdk_vfio_user.so.5.0 00:01:33.747 SYMLINK libspdk_vfio_user.so 00:01:33.747 LIB libspdk_util.a 00:01:33.747 SO libspdk_util.so.9.1 00:01:34.005 SYMLINK libspdk_util.so 00:01:34.005 LIB libspdk_trace_parser.a 00:01:34.262 SO libspdk_trace_parser.so.5.0 00:01:34.262 CC lib/idxd/idxd.o 00:01:34.262 CC lib/json/json_parse.o 00:01:34.262 CC lib/env_dpdk/env.o 00:01:34.262 CC lib/env_dpdk/memory.o 00:01:34.262 CC lib/json/json_util.o 00:01:34.262 CC lib/idxd/idxd_user.o 00:01:34.262 CC lib/env_dpdk/pci.o 00:01:34.262 CC lib/json/json_write.o 00:01:34.262 CC lib/idxd/idxd_kernel.o 00:01:34.262 CC lib/conf/conf.o 00:01:34.262 CC lib/vmd/vmd.o 00:01:34.262 CC lib/rdma_utils/rdma_utils.o 00:01:34.262 CC lib/env_dpdk/init.o 00:01:34.262 CC lib/vmd/led.o 00:01:34.262 CC lib/env_dpdk/threads.o 00:01:34.262 CC lib/env_dpdk/pci_ioat.o 00:01:34.262 CC 
lib/env_dpdk/pci_virtio.o 00:01:34.262 CC lib/env_dpdk/pci_vmd.o 00:01:34.262 CC lib/rdma_provider/common.o 00:01:34.262 CC lib/env_dpdk/pci_idxd.o 00:01:34.262 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:34.262 CC lib/env_dpdk/pci_event.o 00:01:34.262 CC lib/env_dpdk/sigbus_handler.o 00:01:34.262 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:34.262 CC lib/env_dpdk/pci_dpdk.o 00:01:34.262 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:34.262 SYMLINK libspdk_trace_parser.so 00:01:34.520 LIB libspdk_rdma_utils.a 00:01:34.520 LIB libspdk_rdma_provider.a 00:01:34.520 LIB libspdk_json.a 00:01:34.520 SO libspdk_rdma_utils.so.1.0 00:01:34.520 SO libspdk_rdma_provider.so.6.0 00:01:34.520 LIB libspdk_conf.a 00:01:34.520 SO libspdk_json.so.6.0 00:01:34.520 SO libspdk_conf.so.6.0 00:01:34.520 SYMLINK libspdk_rdma_utils.so 00:01:34.520 SYMLINK libspdk_rdma_provider.so 00:01:34.520 SYMLINK libspdk_json.so 00:01:34.520 SYMLINK libspdk_conf.so 00:01:34.777 CC lib/jsonrpc/jsonrpc_server.o 00:01:34.777 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:34.777 CC lib/jsonrpc/jsonrpc_client.o 00:01:34.777 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:34.777 LIB libspdk_idxd.a 00:01:34.777 LIB libspdk_vmd.a 00:01:34.777 SO libspdk_idxd.so.12.0 00:01:35.034 SO libspdk_vmd.so.6.0 00:01:35.034 SYMLINK libspdk_idxd.so 00:01:35.034 SYMLINK libspdk_vmd.so 00:01:35.034 LIB libspdk_jsonrpc.a 00:01:35.034 SO libspdk_jsonrpc.so.6.0 00:01:35.034 SYMLINK libspdk_jsonrpc.so 00:01:35.293 CC lib/rpc/rpc.o 00:01:35.551 LIB libspdk_rpc.a 00:01:35.551 SO libspdk_rpc.so.6.0 00:01:35.551 SYMLINK libspdk_rpc.so 00:01:35.809 CC lib/trace/trace.o 00:01:35.809 CC lib/notify/notify.o 00:01:35.809 CC lib/keyring/keyring.o 00:01:35.809 CC lib/trace/trace_flags.o 00:01:35.809 CC lib/notify/notify_rpc.o 00:01:35.809 CC lib/keyring/keyring_rpc.o 00:01:35.809 CC lib/trace/trace_rpc.o 00:01:35.809 LIB libspdk_notify.a 00:01:35.809 SO libspdk_notify.so.6.0 00:01:36.067 LIB libspdk_keyring.a 00:01:36.067 SYMLINK libspdk_notify.so 00:01:36.067 
LIB libspdk_trace.a 00:01:36.067 SO libspdk_keyring.so.1.0 00:01:36.067 SO libspdk_trace.so.10.0 00:01:36.067 SYMLINK libspdk_keyring.so 00:01:36.067 SYMLINK libspdk_trace.so 00:01:36.325 LIB libspdk_env_dpdk.a 00:01:36.325 CC lib/sock/sock.o 00:01:36.325 CC lib/thread/thread.o 00:01:36.325 CC lib/thread/iobuf.o 00:01:36.325 CC lib/sock/sock_rpc.o 00:01:36.325 SO libspdk_env_dpdk.so.14.1 00:01:36.325 SYMLINK libspdk_env_dpdk.so 00:01:36.583 LIB libspdk_sock.a 00:01:36.583 SO libspdk_sock.so.10.0 00:01:36.841 SYMLINK libspdk_sock.so 00:01:36.841 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:36.841 CC lib/nvme/nvme_ctrlr.o 00:01:36.841 CC lib/nvme/nvme_fabric.o 00:01:36.841 CC lib/nvme/nvme_ns_cmd.o 00:01:36.841 CC lib/nvme/nvme_ns.o 00:01:36.841 CC lib/nvme/nvme_pcie_common.o 00:01:36.841 CC lib/nvme/nvme_pcie.o 00:01:36.841 CC lib/nvme/nvme_qpair.o 00:01:36.841 CC lib/nvme/nvme.o 00:01:36.841 CC lib/nvme/nvme_quirks.o 00:01:36.841 CC lib/nvme/nvme_transport.o 00:01:36.841 CC lib/nvme/nvme_discovery.o 00:01:36.841 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:36.841 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:36.841 CC lib/nvme/nvme_tcp.o 00:01:36.841 CC lib/nvme/nvme_opal.o 00:01:36.841 CC lib/nvme/nvme_io_msg.o 00:01:36.841 CC lib/nvme/nvme_poll_group.o 00:01:36.841 CC lib/nvme/nvme_zns.o 00:01:36.841 CC lib/nvme/nvme_stubs.o 00:01:36.841 CC lib/nvme/nvme_auth.o 00:01:36.841 CC lib/nvme/nvme_cuse.o 00:01:36.841 CC lib/nvme/nvme_vfio_user.o 00:01:36.841 CC lib/nvme/nvme_rdma.o 00:01:37.773 LIB libspdk_thread.a 00:01:37.773 SO libspdk_thread.so.10.1 00:01:37.773 SYMLINK libspdk_thread.so 00:01:38.031 CC lib/vfu_tgt/tgt_endpoint.o 00:01:38.031 CC lib/init/json_config.o 00:01:38.031 CC lib/blob/blobstore.o 00:01:38.031 CC lib/vfu_tgt/tgt_rpc.o 00:01:38.031 CC lib/blob/request.o 00:01:38.031 CC lib/init/subsystem.o 00:01:38.031 CC lib/blob/zeroes.o 00:01:38.031 CC lib/init/subsystem_rpc.o 00:01:38.031 CC lib/blob/blob_bs_dev.o 00:01:38.031 CC lib/init/rpc.o 00:01:38.031 CC lib/accel/accel.o 
00:01:38.031 CC lib/virtio/virtio.o 00:01:38.031 CC lib/virtio/virtio_vhost_user.o 00:01:38.031 CC lib/accel/accel_rpc.o 00:01:38.031 CC lib/virtio/virtio_vfio_user.o 00:01:38.031 CC lib/accel/accel_sw.o 00:01:38.031 CC lib/virtio/virtio_pci.o 00:01:38.288 LIB libspdk_init.a 00:01:38.288 SO libspdk_init.so.5.0 00:01:38.288 LIB libspdk_virtio.a 00:01:38.288 LIB libspdk_vfu_tgt.a 00:01:38.288 SYMLINK libspdk_init.so 00:01:38.545 SO libspdk_vfu_tgt.so.3.0 00:01:38.545 SO libspdk_virtio.so.7.0 00:01:38.545 SYMLINK libspdk_vfu_tgt.so 00:01:38.545 SYMLINK libspdk_virtio.so 00:01:38.545 CC lib/event/app.o 00:01:38.545 CC lib/event/reactor.o 00:01:38.545 CC lib/event/log_rpc.o 00:01:38.545 CC lib/event/app_rpc.o 00:01:38.545 CC lib/event/scheduler_static.o 00:01:39.112 LIB libspdk_event.a 00:01:39.112 SO libspdk_event.so.14.0 00:01:39.112 LIB libspdk_accel.a 00:01:39.112 SYMLINK libspdk_event.so 00:01:39.112 SO libspdk_accel.so.15.1 00:01:39.112 SYMLINK libspdk_accel.so 00:01:39.370 LIB libspdk_nvme.a 00:01:39.370 CC lib/bdev/bdev.o 00:01:39.370 CC lib/bdev/bdev_rpc.o 00:01:39.370 CC lib/bdev/bdev_zone.o 00:01:39.370 CC lib/bdev/part.o 00:01:39.370 CC lib/bdev/scsi_nvme.o 00:01:39.370 SO libspdk_nvme.so.13.1 00:01:39.629 SYMLINK libspdk_nvme.so 00:01:41.046 LIB libspdk_blob.a 00:01:41.046 SO libspdk_blob.so.11.0 00:01:41.046 SYMLINK libspdk_blob.so 00:01:41.303 CC lib/blobfs/blobfs.o 00:01:41.303 CC lib/blobfs/tree.o 00:01:41.303 CC lib/lvol/lvol.o 00:01:41.868 LIB libspdk_bdev.a 00:01:41.868 SO libspdk_bdev.so.15.1 00:01:41.868 SYMLINK libspdk_bdev.so 00:01:42.133 CC lib/nvmf/ctrlr.o 00:01:42.133 CC lib/ublk/ublk.o 00:01:42.133 CC lib/nvmf/ctrlr_discovery.o 00:01:42.133 CC lib/ftl/ftl_core.o 00:01:42.133 CC lib/ublk/ublk_rpc.o 00:01:42.133 CC lib/nvmf/ctrlr_bdev.o 00:01:42.133 CC lib/ftl/ftl_init.o 00:01:42.133 CC lib/nvmf/subsystem.o 00:01:42.133 CC lib/scsi/dev.o 00:01:42.133 CC lib/ftl/ftl_layout.o 00:01:42.133 CC lib/nvmf/nvmf.o 00:01:42.133 CC lib/scsi/lun.o 
00:01:42.133 CC lib/ftl/ftl_debug.o 00:01:42.133 CC lib/nbd/nbd.o 00:01:42.133 CC lib/scsi/port.o 00:01:42.133 CC lib/ftl/ftl_io.o 00:01:42.133 CC lib/nvmf/transport.o 00:01:42.133 CC lib/nvmf/nvmf_rpc.o 00:01:42.133 CC lib/nbd/nbd_rpc.o 00:01:42.133 CC lib/scsi/scsi.o 00:01:42.133 CC lib/nvmf/tcp.o 00:01:42.133 CC lib/ftl/ftl_sb.o 00:01:42.133 CC lib/scsi/scsi_bdev.o 00:01:42.133 CC lib/nvmf/stubs.o 00:01:42.133 CC lib/scsi/scsi_pr.o 00:01:42.133 CC lib/ftl/ftl_l2p.o 00:01:42.133 CC lib/ftl/ftl_l2p_flat.o 00:01:42.133 CC lib/scsi/scsi_rpc.o 00:01:42.133 CC lib/nvmf/vfio_user.o 00:01:42.133 CC lib/nvmf/mdns_server.o 00:01:42.133 CC lib/scsi/task.o 00:01:42.133 CC lib/ftl/ftl_nv_cache.o 00:01:42.133 CC lib/nvmf/rdma.o 00:01:42.133 CC lib/ftl/ftl_band.o 00:01:42.133 CC lib/ftl/ftl_band_ops.o 00:01:42.133 CC lib/nvmf/auth.o 00:01:42.133 CC lib/ftl/ftl_writer.o 00:01:42.133 CC lib/ftl/ftl_rq.o 00:01:42.133 CC lib/ftl/ftl_reloc.o 00:01:42.133 CC lib/ftl/ftl_l2p_cache.o 00:01:42.133 CC lib/ftl/ftl_p2l.o 00:01:42.133 CC lib/ftl/mngt/ftl_mngt.o 00:01:42.133 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:42.133 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:42.133 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:42.133 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:42.394 LIB libspdk_blobfs.a 00:01:42.394 SO libspdk_blobfs.so.10.0 00:01:42.394 SYMLINK libspdk_blobfs.so 00:01:42.394 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:42.394 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:42.394 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:42.394 LIB libspdk_lvol.a 00:01:42.394 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:42.394 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:42.394 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:42.394 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:42.658 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:42.658 SO libspdk_lvol.so.10.0 00:01:42.658 CC lib/ftl/utils/ftl_conf.o 00:01:42.658 CC lib/ftl/utils/ftl_md.o 00:01:42.658 CC lib/ftl/utils/ftl_mempool.o 00:01:42.658 CC lib/ftl/utils/ftl_bitmap.o 00:01:42.658 CC 
lib/ftl/utils/ftl_property.o 00:01:42.658 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:42.658 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:42.658 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:42.658 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:42.658 SYMLINK libspdk_lvol.so 00:01:42.658 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:42.658 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:42.658 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:42.658 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:42.658 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:42.658 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:42.916 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:42.916 CC lib/ftl/base/ftl_base_dev.o 00:01:42.916 CC lib/ftl/base/ftl_base_bdev.o 00:01:42.916 CC lib/ftl/ftl_trace.o 00:01:42.916 LIB libspdk_nbd.a 00:01:42.916 SO libspdk_nbd.so.7.0 00:01:43.173 SYMLINK libspdk_nbd.so 00:01:43.173 LIB libspdk_scsi.a 00:01:43.173 SO libspdk_scsi.so.9.0 00:01:43.173 LIB libspdk_ublk.a 00:01:43.173 SO libspdk_ublk.so.3.0 00:01:43.173 SYMLINK libspdk_scsi.so 00:01:43.173 SYMLINK libspdk_ublk.so 00:01:43.432 CC lib/vhost/vhost.o 00:01:43.432 CC lib/iscsi/conn.o 00:01:43.432 CC lib/vhost/vhost_rpc.o 00:01:43.432 CC lib/iscsi/init_grp.o 00:01:43.432 CC lib/iscsi/iscsi.o 00:01:43.432 CC lib/vhost/vhost_scsi.o 00:01:43.432 CC lib/iscsi/md5.o 00:01:43.432 CC lib/vhost/vhost_blk.o 00:01:43.432 CC lib/iscsi/param.o 00:01:43.432 CC lib/vhost/rte_vhost_user.o 00:01:43.432 CC lib/iscsi/portal_grp.o 00:01:43.432 CC lib/iscsi/tgt_node.o 00:01:43.432 CC lib/iscsi/iscsi_subsystem.o 00:01:43.432 CC lib/iscsi/iscsi_rpc.o 00:01:43.432 CC lib/iscsi/task.o 00:01:43.432 LIB libspdk_ftl.a 00:01:43.690 SO libspdk_ftl.so.9.0 00:01:43.947 SYMLINK libspdk_ftl.so 00:01:44.513 LIB libspdk_vhost.a 00:01:44.770 SO libspdk_vhost.so.8.0 00:01:44.770 LIB libspdk_nvmf.a 00:01:44.770 SYMLINK libspdk_vhost.so 00:01:44.770 SO libspdk_nvmf.so.18.1 00:01:44.770 LIB libspdk_iscsi.a 00:01:45.028 SO libspdk_iscsi.so.8.0 00:01:45.028 SYMLINK libspdk_nvmf.so 00:01:45.028 
SYMLINK libspdk_iscsi.so 00:01:45.286 CC module/env_dpdk/env_dpdk_rpc.o 00:01:45.286 CC module/vfu_device/vfu_virtio.o 00:01:45.286 CC module/vfu_device/vfu_virtio_blk.o 00:01:45.286 CC module/vfu_device/vfu_virtio_scsi.o 00:01:45.286 CC module/vfu_device/vfu_virtio_rpc.o 00:01:45.545 CC module/blob/bdev/blob_bdev.o 00:01:45.545 CC module/keyring/linux/keyring.o 00:01:45.545 CC module/accel/error/accel_error.o 00:01:45.545 CC module/keyring/linux/keyring_rpc.o 00:01:45.545 CC module/accel/error/accel_error_rpc.o 00:01:45.545 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:45.545 CC module/sock/posix/posix.o 00:01:45.545 CC module/accel/iaa/accel_iaa.o 00:01:45.545 CC module/accel/dsa/accel_dsa.o 00:01:45.545 CC module/accel/dsa/accel_dsa_rpc.o 00:01:45.545 CC module/scheduler/gscheduler/gscheduler.o 00:01:45.545 CC module/accel/iaa/accel_iaa_rpc.o 00:01:45.545 CC module/accel/ioat/accel_ioat.o 00:01:45.545 CC module/accel/ioat/accel_ioat_rpc.o 00:01:45.545 CC module/keyring/file/keyring.o 00:01:45.545 CC module/keyring/file/keyring_rpc.o 00:01:45.545 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:45.545 LIB libspdk_env_dpdk_rpc.a 00:01:45.545 SO libspdk_env_dpdk_rpc.so.6.0 00:01:45.545 SYMLINK libspdk_env_dpdk_rpc.so 00:01:45.545 LIB libspdk_keyring_linux.a 00:01:45.545 LIB libspdk_keyring_file.a 00:01:45.545 LIB libspdk_scheduler_gscheduler.a 00:01:45.545 LIB libspdk_scheduler_dpdk_governor.a 00:01:45.545 SO libspdk_keyring_linux.so.1.0 00:01:45.545 SO libspdk_keyring_file.so.1.0 00:01:45.545 SO libspdk_scheduler_gscheduler.so.4.0 00:01:45.545 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:45.545 LIB libspdk_accel_error.a 00:01:45.545 LIB libspdk_accel_ioat.a 00:01:45.545 LIB libspdk_scheduler_dynamic.a 00:01:45.545 LIB libspdk_accel_iaa.a 00:01:45.545 SO libspdk_accel_error.so.2.0 00:01:45.803 SO libspdk_accel_ioat.so.6.0 00:01:45.803 SO libspdk_scheduler_dynamic.so.4.0 00:01:45.803 SYMLINK libspdk_keyring_linux.so 00:01:45.803 SYMLINK 
libspdk_scheduler_gscheduler.so 00:01:45.803 SYMLINK libspdk_keyring_file.so 00:01:45.803 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:45.803 SO libspdk_accel_iaa.so.3.0 00:01:45.803 LIB libspdk_accel_dsa.a 00:01:45.803 LIB libspdk_blob_bdev.a 00:01:45.803 SYMLINK libspdk_accel_error.so 00:01:45.803 SYMLINK libspdk_scheduler_dynamic.so 00:01:45.803 SYMLINK libspdk_accel_ioat.so 00:01:45.803 SO libspdk_blob_bdev.so.11.0 00:01:45.803 SO libspdk_accel_dsa.so.5.0 00:01:45.803 SYMLINK libspdk_accel_iaa.so 00:01:45.803 SYMLINK libspdk_blob_bdev.so 00:01:45.803 SYMLINK libspdk_accel_dsa.so 00:01:46.066 LIB libspdk_vfu_device.a 00:01:46.066 CC module/bdev/error/vbdev_error.o 00:01:46.066 CC module/blobfs/bdev/blobfs_bdev.o 00:01:46.066 CC module/bdev/passthru/vbdev_passthru.o 00:01:46.066 CC module/bdev/lvol/vbdev_lvol.o 00:01:46.066 CC module/bdev/error/vbdev_error_rpc.o 00:01:46.066 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:46.066 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:46.066 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:46.066 CC module/bdev/null/bdev_null.o 00:01:46.066 CC module/bdev/delay/vbdev_delay.o 00:01:46.066 CC module/bdev/gpt/gpt.o 00:01:46.066 CC module/bdev/gpt/vbdev_gpt.o 00:01:46.066 CC module/bdev/null/bdev_null_rpc.o 00:01:46.066 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:46.066 CC module/bdev/ftl/bdev_ftl.o 00:01:46.066 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:46.066 CC module/bdev/malloc/bdev_malloc.o 00:01:46.066 CC module/bdev/raid/bdev_raid.o 00:01:46.066 SO libspdk_vfu_device.so.3.0 00:01:46.066 CC module/bdev/nvme/bdev_nvme.o 00:01:46.066 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:46.066 CC module/bdev/split/vbdev_split.o 00:01:46.066 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:46.066 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:46.066 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:46.066 CC module/bdev/nvme/nvme_rpc.o 00:01:46.066 CC module/bdev/split/vbdev_split_rpc.o 00:01:46.066 CC 
module/bdev/raid/bdev_raid_rpc.o 00:01:46.066 CC module/bdev/nvme/bdev_mdns_client.o 00:01:46.067 CC module/bdev/aio/bdev_aio.o 00:01:46.067 CC module/bdev/raid/bdev_raid_sb.o 00:01:46.067 CC module/bdev/nvme/vbdev_opal.o 00:01:46.067 CC module/bdev/raid/raid0.o 00:01:46.067 CC module/bdev/raid/raid1.o 00:01:46.067 CC module/bdev/aio/bdev_aio_rpc.o 00:01:46.067 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:46.067 CC module/bdev/raid/concat.o 00:01:46.067 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:46.067 CC module/bdev/iscsi/bdev_iscsi.o 00:01:46.067 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:46.067 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:46.067 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:46.067 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:46.067 SYMLINK libspdk_vfu_device.so 00:01:46.324 LIB libspdk_sock_posix.a 00:01:46.324 SO libspdk_sock_posix.so.6.0 00:01:46.324 SYMLINK libspdk_sock_posix.so 00:01:46.324 LIB libspdk_blobfs_bdev.a 00:01:46.582 SO libspdk_blobfs_bdev.so.6.0 00:01:46.582 LIB libspdk_bdev_split.a 00:01:46.582 LIB libspdk_bdev_null.a 00:01:46.582 SO libspdk_bdev_split.so.6.0 00:01:46.582 LIB libspdk_bdev_error.a 00:01:46.582 LIB libspdk_bdev_ftl.a 00:01:46.582 SYMLINK libspdk_blobfs_bdev.so 00:01:46.582 SO libspdk_bdev_null.so.6.0 00:01:46.582 SO libspdk_bdev_error.so.6.0 00:01:46.582 SO libspdk_bdev_ftl.so.6.0 00:01:46.582 LIB libspdk_bdev_passthru.a 00:01:46.582 SYMLINK libspdk_bdev_split.so 00:01:46.582 LIB libspdk_bdev_gpt.a 00:01:46.582 SO libspdk_bdev_passthru.so.6.0 00:01:46.582 SYMLINK libspdk_bdev_null.so 00:01:46.582 SYMLINK libspdk_bdev_error.so 00:01:46.582 SYMLINK libspdk_bdev_ftl.so 00:01:46.582 SO libspdk_bdev_gpt.so.6.0 00:01:46.582 LIB libspdk_bdev_zone_block.a 00:01:46.582 LIB libspdk_bdev_malloc.a 00:01:46.582 LIB libspdk_bdev_aio.a 00:01:46.582 SYMLINK libspdk_bdev_passthru.so 00:01:46.582 SO libspdk_bdev_zone_block.so.6.0 00:01:46.582 SO libspdk_bdev_malloc.so.6.0 00:01:46.582 SO libspdk_bdev_aio.so.6.0 00:01:46.582 
SYMLINK libspdk_bdev_gpt.so 00:01:46.582 LIB libspdk_bdev_iscsi.a 00:01:46.582 SYMLINK libspdk_bdev_zone_block.so 00:01:46.582 SO libspdk_bdev_iscsi.so.6.0 00:01:46.582 LIB libspdk_bdev_delay.a 00:01:46.582 SYMLINK libspdk_bdev_malloc.so 00:01:46.582 SYMLINK libspdk_bdev_aio.so 00:01:46.839 SO libspdk_bdev_delay.so.6.0 00:01:46.839 SYMLINK libspdk_bdev_iscsi.so 00:01:46.839 LIB libspdk_bdev_lvol.a 00:01:46.839 SYMLINK libspdk_bdev_delay.so 00:01:46.839 SO libspdk_bdev_lvol.so.6.0 00:01:46.839 LIB libspdk_bdev_virtio.a 00:01:46.839 SYMLINK libspdk_bdev_lvol.so 00:01:46.839 SO libspdk_bdev_virtio.so.6.0 00:01:46.839 SYMLINK libspdk_bdev_virtio.so 00:01:47.406 LIB libspdk_bdev_raid.a 00:01:47.406 SO libspdk_bdev_raid.so.6.0 00:01:47.406 SYMLINK libspdk_bdev_raid.so 00:01:48.337 LIB libspdk_bdev_nvme.a 00:01:48.337 SO libspdk_bdev_nvme.so.7.0 00:01:48.337 SYMLINK libspdk_bdev_nvme.so 00:01:48.903 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:48.903 CC module/event/subsystems/keyring/keyring.o 00:01:48.903 CC module/event/subsystems/scheduler/scheduler.o 00:01:48.903 CC module/event/subsystems/vmd/vmd.o 00:01:48.903 CC module/event/subsystems/sock/sock.o 00:01:48.903 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:48.903 CC module/event/subsystems/iobuf/iobuf.o 00:01:48.903 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:48.903 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:48.903 LIB libspdk_event_keyring.a 00:01:48.903 LIB libspdk_event_vhost_blk.a 00:01:48.903 LIB libspdk_event_scheduler.a 00:01:48.903 LIB libspdk_event_vfu_tgt.a 00:01:48.903 LIB libspdk_event_vmd.a 00:01:48.903 LIB libspdk_event_sock.a 00:01:48.903 LIB libspdk_event_iobuf.a 00:01:48.903 SO libspdk_event_keyring.so.1.0 00:01:48.903 SO libspdk_event_vhost_blk.so.3.0 00:01:48.903 SO libspdk_event_vfu_tgt.so.3.0 00:01:48.903 SO libspdk_event_scheduler.so.4.0 00:01:48.903 SO libspdk_event_sock.so.5.0 00:01:48.903 SO libspdk_event_vmd.so.6.0 00:01:48.903 SO libspdk_event_iobuf.so.3.0 
00:01:49.162 SYMLINK libspdk_event_keyring.so 00:01:49.162 SYMLINK libspdk_event_vhost_blk.so 00:01:49.162 SYMLINK libspdk_event_vfu_tgt.so 00:01:49.162 SYMLINK libspdk_event_scheduler.so 00:01:49.162 SYMLINK libspdk_event_sock.so 00:01:49.162 SYMLINK libspdk_event_vmd.so 00:01:49.162 SYMLINK libspdk_event_iobuf.so 00:01:49.162 CC module/event/subsystems/accel/accel.o 00:01:49.421 LIB libspdk_event_accel.a 00:01:49.421 SO libspdk_event_accel.so.6.0 00:01:49.421 SYMLINK libspdk_event_accel.so 00:01:49.678 CC module/event/subsystems/bdev/bdev.o 00:01:49.937 LIB libspdk_event_bdev.a 00:01:49.937 SO libspdk_event_bdev.so.6.0 00:01:49.937 SYMLINK libspdk_event_bdev.so 00:01:49.937 CC module/event/subsystems/scsi/scsi.o 00:01:49.937 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:49.937 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:49.937 CC module/event/subsystems/nbd/nbd.o 00:01:49.937 CC module/event/subsystems/ublk/ublk.o 00:01:50.194 LIB libspdk_event_nbd.a 00:01:50.194 LIB libspdk_event_ublk.a 00:01:50.194 LIB libspdk_event_scsi.a 00:01:50.194 SO libspdk_event_nbd.so.6.0 00:01:50.194 SO libspdk_event_ublk.so.3.0 00:01:50.194 SO libspdk_event_scsi.so.6.0 00:01:50.194 SYMLINK libspdk_event_nbd.so 00:01:50.194 SYMLINK libspdk_event_ublk.so 00:01:50.194 SYMLINK libspdk_event_scsi.so 00:01:50.194 LIB libspdk_event_nvmf.a 00:01:50.450 SO libspdk_event_nvmf.so.6.0 00:01:50.450 SYMLINK libspdk_event_nvmf.so 00:01:50.450 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:50.450 CC module/event/subsystems/iscsi/iscsi.o 00:01:50.708 LIB libspdk_event_vhost_scsi.a 00:01:50.708 SO libspdk_event_vhost_scsi.so.3.0 00:01:50.708 LIB libspdk_event_iscsi.a 00:01:50.708 SO libspdk_event_iscsi.so.6.0 00:01:50.708 SYMLINK libspdk_event_vhost_scsi.so 00:01:50.708 SYMLINK libspdk_event_iscsi.so 00:01:50.708 SO libspdk.so.6.0 00:01:50.708 SYMLINK libspdk.so 00:01:50.968 TEST_HEADER include/spdk/accel_module.h 00:01:50.968 TEST_HEADER include/spdk/accel.h 00:01:50.968 CC 
test/rpc_client/rpc_client_test.o 00:01:50.968 TEST_HEADER include/spdk/assert.h 00:01:50.968 TEST_HEADER include/spdk/barrier.h 00:01:50.968 TEST_HEADER include/spdk/base64.h 00:01:50.968 TEST_HEADER include/spdk/bdev.h 00:01:50.968 TEST_HEADER include/spdk/bdev_module.h 00:01:50.968 CC app/trace_record/trace_record.o 00:01:50.968 TEST_HEADER include/spdk/bdev_zone.h 00:01:50.968 CC app/spdk_nvme_perf/perf.o 00:01:50.968 TEST_HEADER include/spdk/bit_array.h 00:01:50.968 TEST_HEADER include/spdk/bit_pool.h 00:01:50.968 CC app/spdk_nvme_identify/identify.o 00:01:50.968 CC app/spdk_top/spdk_top.o 00:01:50.968 TEST_HEADER include/spdk/blob_bdev.h 00:01:50.968 CXX app/trace/trace.o 00:01:50.968 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:50.968 TEST_HEADER include/spdk/blobfs.h 00:01:50.968 CC app/spdk_nvme_discover/discovery_aer.o 00:01:50.968 TEST_HEADER include/spdk/blob.h 00:01:50.968 CC app/spdk_lspci/spdk_lspci.o 00:01:50.968 TEST_HEADER include/spdk/conf.h 00:01:50.968 TEST_HEADER include/spdk/config.h 00:01:50.968 TEST_HEADER include/spdk/cpuset.h 00:01:50.968 TEST_HEADER include/spdk/crc16.h 00:01:50.968 TEST_HEADER include/spdk/crc32.h 00:01:50.968 TEST_HEADER include/spdk/crc64.h 00:01:50.968 TEST_HEADER include/spdk/dif.h 00:01:50.968 TEST_HEADER include/spdk/dma.h 00:01:50.968 TEST_HEADER include/spdk/endian.h 00:01:50.968 TEST_HEADER include/spdk/env_dpdk.h 00:01:50.968 TEST_HEADER include/spdk/env.h 00:01:50.968 TEST_HEADER include/spdk/event.h 00:01:50.968 TEST_HEADER include/spdk/fd_group.h 00:01:50.968 TEST_HEADER include/spdk/fd.h 00:01:50.968 TEST_HEADER include/spdk/file.h 00:01:50.968 TEST_HEADER include/spdk/ftl.h 00:01:50.968 TEST_HEADER include/spdk/gpt_spec.h 00:01:50.968 TEST_HEADER include/spdk/hexlify.h 00:01:50.968 TEST_HEADER include/spdk/histogram_data.h 00:01:50.968 TEST_HEADER include/spdk/idxd.h 00:01:50.968 TEST_HEADER include/spdk/idxd_spec.h 00:01:50.968 TEST_HEADER include/spdk/init.h 00:01:50.968 TEST_HEADER include/spdk/ioat.h 
00:01:50.968 TEST_HEADER include/spdk/iscsi_spec.h 00:01:50.968 TEST_HEADER include/spdk/ioat_spec.h 00:01:50.968 TEST_HEADER include/spdk/json.h 00:01:50.968 TEST_HEADER include/spdk/jsonrpc.h 00:01:50.968 TEST_HEADER include/spdk/keyring_module.h 00:01:50.968 TEST_HEADER include/spdk/keyring.h 00:01:50.968 TEST_HEADER include/spdk/likely.h 00:01:50.968 TEST_HEADER include/spdk/log.h 00:01:50.968 TEST_HEADER include/spdk/lvol.h 00:01:50.968 TEST_HEADER include/spdk/memory.h 00:01:50.968 TEST_HEADER include/spdk/mmio.h 00:01:50.968 TEST_HEADER include/spdk/nbd.h 00:01:50.968 TEST_HEADER include/spdk/notify.h 00:01:50.968 TEST_HEADER include/spdk/nvme.h 00:01:50.968 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:50.968 TEST_HEADER include/spdk/nvme_intel.h 00:01:50.968 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:50.968 TEST_HEADER include/spdk/nvme_spec.h 00:01:50.968 TEST_HEADER include/spdk/nvme_zns.h 00:01:50.968 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:50.968 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:50.968 TEST_HEADER include/spdk/nvmf.h 00:01:50.968 TEST_HEADER include/spdk/nvmf_spec.h 00:01:50.968 TEST_HEADER include/spdk/opal.h 00:01:50.968 TEST_HEADER include/spdk/nvmf_transport.h 00:01:50.968 TEST_HEADER include/spdk/opal_spec.h 00:01:50.968 TEST_HEADER include/spdk/pci_ids.h 00:01:50.968 TEST_HEADER include/spdk/pipe.h 00:01:50.968 TEST_HEADER include/spdk/queue.h 00:01:50.968 TEST_HEADER include/spdk/rpc.h 00:01:50.968 TEST_HEADER include/spdk/reduce.h 00:01:50.968 TEST_HEADER include/spdk/scsi.h 00:01:50.968 TEST_HEADER include/spdk/scheduler.h 00:01:50.968 TEST_HEADER include/spdk/scsi_spec.h 00:01:50.968 TEST_HEADER include/spdk/sock.h 00:01:50.968 TEST_HEADER include/spdk/stdinc.h 00:01:50.968 TEST_HEADER include/spdk/string.h 00:01:50.968 TEST_HEADER include/spdk/thread.h 00:01:50.968 TEST_HEADER include/spdk/trace.h 00:01:50.968 TEST_HEADER include/spdk/trace_parser.h 00:01:50.968 TEST_HEADER include/spdk/tree.h 00:01:50.968 TEST_HEADER 
include/spdk/ublk.h 00:01:50.968 TEST_HEADER include/spdk/util.h 00:01:50.968 TEST_HEADER include/spdk/uuid.h 00:01:50.968 TEST_HEADER include/spdk/version.h 00:01:50.968 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:50.968 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:50.968 TEST_HEADER include/spdk/vhost.h 00:01:50.968 TEST_HEADER include/spdk/vmd.h 00:01:50.968 TEST_HEADER include/spdk/xor.h 00:01:50.968 TEST_HEADER include/spdk/zipf.h 00:01:50.968 CXX test/cpp_headers/accel.o 00:01:50.968 CXX test/cpp_headers/accel_module.o 00:01:50.968 CXX test/cpp_headers/assert.o 00:01:50.968 CXX test/cpp_headers/barrier.o 00:01:50.968 CXX test/cpp_headers/base64.o 00:01:50.968 CXX test/cpp_headers/bdev.o 00:01:50.968 CXX test/cpp_headers/bdev_module.o 00:01:50.968 CXX test/cpp_headers/bdev_zone.o 00:01:50.968 CXX test/cpp_headers/bit_pool.o 00:01:50.969 CXX test/cpp_headers/bit_array.o 00:01:50.969 CXX test/cpp_headers/blobfs_bdev.o 00:01:50.969 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:50.969 CXX test/cpp_headers/blob_bdev.o 00:01:50.969 CXX test/cpp_headers/blobfs.o 00:01:50.969 CXX test/cpp_headers/blob.o 00:01:50.969 CC app/spdk_dd/spdk_dd.o 00:01:50.969 CXX test/cpp_headers/conf.o 00:01:50.969 CXX test/cpp_headers/config.o 00:01:50.969 CXX test/cpp_headers/cpuset.o 00:01:50.969 CXX test/cpp_headers/crc16.o 00:01:50.969 CC app/iscsi_tgt/iscsi_tgt.o 00:01:51.235 CC app/nvmf_tgt/nvmf_main.o 00:01:51.235 CXX test/cpp_headers/crc32.o 00:01:51.235 CC test/app/histogram_perf/histogram_perf.o 00:01:51.235 CC examples/util/zipf/zipf.o 00:01:51.235 CC test/thread/poller_perf/poller_perf.o 00:01:51.235 CC examples/ioat/perf/perf.o 00:01:51.235 CC test/env/pci/pci_ut.o 00:01:51.235 CC test/env/vtophys/vtophys.o 00:01:51.235 CC examples/ioat/verify/verify.o 00:01:51.235 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:51.235 CC test/app/jsoncat/jsoncat.o 00:01:51.235 CC app/spdk_tgt/spdk_tgt.o 00:01:51.235 CC test/env/memory/memory_ut.o 00:01:51.235 CC 
test/app/stub/stub.o 00:01:51.235 CC app/fio/nvme/fio_plugin.o 00:01:51.235 CC test/dma/test_dma/test_dma.o 00:01:51.235 CC test/app/bdev_svc/bdev_svc.o 00:01:51.235 CC app/fio/bdev/fio_plugin.o 00:01:51.235 LINK spdk_lspci 00:01:51.235 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:51.235 CC test/env/mem_callbacks/mem_callbacks.o 00:01:51.494 LINK rpc_client_test 00:01:51.494 LINK spdk_nvme_discover 00:01:51.494 LINK jsoncat 00:01:51.494 LINK poller_perf 00:01:51.494 LINK histogram_perf 00:01:51.494 LINK vtophys 00:01:51.494 CXX test/cpp_headers/crc64.o 00:01:51.494 LINK interrupt_tgt 00:01:51.494 LINK zipf 00:01:51.494 CXX test/cpp_headers/dif.o 00:01:51.494 LINK nvmf_tgt 00:01:51.494 CXX test/cpp_headers/dma.o 00:01:51.494 CXX test/cpp_headers/endian.o 00:01:51.494 CXX test/cpp_headers/env_dpdk.o 00:01:51.494 CXX test/cpp_headers/env.o 00:01:51.494 CXX test/cpp_headers/event.o 00:01:51.494 CXX test/cpp_headers/fd_group.o 00:01:51.494 LINK env_dpdk_post_init 00:01:51.494 CXX test/cpp_headers/fd.o 00:01:51.494 CXX test/cpp_headers/file.o 00:01:51.494 CXX test/cpp_headers/ftl.o 00:01:51.494 CXX test/cpp_headers/gpt_spec.o 00:01:51.494 LINK iscsi_tgt 00:01:51.494 LINK spdk_trace_record 00:01:51.494 CXX test/cpp_headers/hexlify.o 00:01:51.494 LINK stub 00:01:51.494 CXX test/cpp_headers/histogram_data.o 00:01:51.494 CXX test/cpp_headers/idxd.o 00:01:51.759 LINK verify 00:01:51.759 CXX test/cpp_headers/idxd_spec.o 00:01:51.759 LINK ioat_perf 00:01:51.759 LINK bdev_svc 00:01:51.759 LINK spdk_tgt 00:01:51.759 CXX test/cpp_headers/init.o 00:01:51.759 CXX test/cpp_headers/ioat.o 00:01:51.759 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:51.759 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:51.759 CXX test/cpp_headers/ioat_spec.o 00:01:51.759 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:51.759 CXX test/cpp_headers/iscsi_spec.o 00:01:51.759 CXX test/cpp_headers/json.o 00:01:51.759 CXX test/cpp_headers/jsonrpc.o 00:01:51.759 CXX test/cpp_headers/keyring.o 00:01:51.759 
LINK spdk_dd 00:01:51.759 LINK spdk_trace 00:01:52.022 CXX test/cpp_headers/keyring_module.o 00:01:52.022 LINK pci_ut 00:01:52.022 CXX test/cpp_headers/likely.o 00:01:52.022 CXX test/cpp_headers/log.o 00:01:52.022 CXX test/cpp_headers/lvol.o 00:01:52.022 CXX test/cpp_headers/memory.o 00:01:52.022 CXX test/cpp_headers/mmio.o 00:01:52.022 CXX test/cpp_headers/nbd.o 00:01:52.022 CXX test/cpp_headers/notify.o 00:01:52.022 LINK test_dma 00:01:52.022 CXX test/cpp_headers/nvme.o 00:01:52.022 CXX test/cpp_headers/nvme_intel.o 00:01:52.022 CXX test/cpp_headers/nvme_ocssd.o 00:01:52.022 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:52.022 CXX test/cpp_headers/nvme_spec.o 00:01:52.022 CXX test/cpp_headers/nvme_zns.o 00:01:52.022 CXX test/cpp_headers/nvmf_cmd.o 00:01:52.022 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:52.022 CXX test/cpp_headers/nvmf.o 00:01:52.022 CXX test/cpp_headers/nvmf_spec.o 00:01:52.022 CXX test/cpp_headers/nvmf_transport.o 00:01:52.022 CXX test/cpp_headers/opal.o 00:01:52.022 CC test/event/event_perf/event_perf.o 00:01:52.022 CC test/event/reactor/reactor.o 00:01:52.022 CXX test/cpp_headers/opal_spec.o 00:01:52.022 CC test/event/reactor_perf/reactor_perf.o 00:01:52.022 LINK nvme_fuzz 00:01:52.282 CC test/event/app_repeat/app_repeat.o 00:01:52.282 CXX test/cpp_headers/pci_ids.o 00:01:52.282 CXX test/cpp_headers/pipe.o 00:01:52.282 CXX test/cpp_headers/queue.o 00:01:52.282 CXX test/cpp_headers/reduce.o 00:01:52.282 CC test/event/scheduler/scheduler.o 00:01:52.282 CC examples/sock/hello_world/hello_sock.o 00:01:52.282 CXX test/cpp_headers/rpc.o 00:01:52.282 CXX test/cpp_headers/scheduler.o 00:01:52.282 CC examples/vmd/lsvmd/lsvmd.o 00:01:52.282 CC examples/idxd/perf/perf.o 00:01:52.282 CC examples/thread/thread/thread_ex.o 00:01:52.282 LINK spdk_nvme 00:01:52.282 CXX test/cpp_headers/scsi.o 00:01:52.282 CC examples/vmd/led/led.o 00:01:52.282 LINK spdk_bdev 00:01:52.282 CXX test/cpp_headers/scsi_spec.o 00:01:52.282 CXX test/cpp_headers/sock.o 00:01:52.282 CXX 
test/cpp_headers/stdinc.o 00:01:52.282 CXX test/cpp_headers/string.o 00:01:52.282 CXX test/cpp_headers/thread.o 00:01:52.282 CXX test/cpp_headers/trace.o 00:01:52.545 LINK event_perf 00:01:52.545 LINK reactor 00:01:52.545 LINK reactor_perf 00:01:52.545 CXX test/cpp_headers/trace_parser.o 00:01:52.545 CXX test/cpp_headers/tree.o 00:01:52.545 CXX test/cpp_headers/ublk.o 00:01:52.545 CXX test/cpp_headers/util.o 00:01:52.545 CXX test/cpp_headers/uuid.o 00:01:52.545 CXX test/cpp_headers/version.o 00:01:52.545 CXX test/cpp_headers/vfio_user_pci.o 00:01:52.545 CXX test/cpp_headers/vfio_user_spec.o 00:01:52.545 CXX test/cpp_headers/vhost.o 00:01:52.545 CXX test/cpp_headers/vmd.o 00:01:52.545 CXX test/cpp_headers/xor.o 00:01:52.545 LINK spdk_nvme_perf 00:01:52.545 LINK app_repeat 00:01:52.545 CXX test/cpp_headers/zipf.o 00:01:52.545 CC app/vhost/vhost.o 00:01:52.545 LINK lsvmd 00:01:52.545 LINK mem_callbacks 00:01:52.545 LINK vhost_fuzz 00:01:52.545 LINK led 00:01:52.545 LINK spdk_nvme_identify 00:01:52.545 LINK scheduler 00:01:52.804 LINK spdk_top 00:01:52.804 LINK hello_sock 00:01:52.804 CC test/nvme/err_injection/err_injection.o 00:01:52.804 CC test/nvme/aer/aer.o 00:01:52.804 CC test/accel/dif/dif.o 00:01:52.804 CC test/nvme/startup/startup.o 00:01:52.804 CC test/nvme/e2edp/nvme_dp.o 00:01:52.804 CC test/nvme/overhead/overhead.o 00:01:52.804 CC test/nvme/reserve/reserve.o 00:01:52.804 CC test/nvme/reset/reset.o 00:01:52.804 CC test/nvme/sgl/sgl.o 00:01:52.804 CC test/nvme/simple_copy/simple_copy.o 00:01:52.804 CC test/blobfs/mkfs/mkfs.o 00:01:52.804 LINK thread 00:01:52.804 CC test/nvme/connect_stress/connect_stress.o 00:01:52.804 CC test/nvme/compliance/nvme_compliance.o 00:01:52.804 CC test/nvme/boot_partition/boot_partition.o 00:01:52.804 CC test/nvme/cuse/cuse.o 00:01:52.804 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:52.804 CC test/nvme/fdp/fdp.o 00:01:52.804 CC test/nvme/fused_ordering/fused_ordering.o 00:01:52.804 CC test/lvol/esnap/esnap.o 00:01:52.804 LINK 
idxd_perf 00:01:52.804 LINK vhost 00:01:53.061 LINK err_injection 00:01:53.061 LINK mkfs 00:01:53.061 LINK reserve 00:01:53.061 LINK boot_partition 00:01:53.061 LINK startup 00:01:53.061 LINK simple_copy 00:01:53.061 LINK nvme_dp 00:01:53.061 CC examples/nvme/reconnect/reconnect.o 00:01:53.061 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:53.061 LINK connect_stress 00:01:53.061 CC examples/nvme/hello_world/hello_world.o 00:01:53.061 CC examples/nvme/abort/abort.o 00:01:53.061 CC examples/nvme/hotplug/hotplug.o 00:01:53.061 LINK aer 00:01:53.061 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:53.061 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:53.061 CC examples/nvme/arbitration/arbitration.o 00:01:53.061 LINK overhead 00:01:53.061 LINK fused_ordering 00:01:53.319 LINK doorbell_aers 00:01:53.319 LINK reset 00:01:53.319 LINK sgl 00:01:53.319 LINK memory_ut 00:01:53.319 CC examples/accel/perf/accel_perf.o 00:01:53.319 LINK fdp 00:01:53.319 CC examples/blob/hello_world/hello_blob.o 00:01:53.319 CC examples/blob/cli/blobcli.o 00:01:53.319 LINK dif 00:01:53.319 LINK nvme_compliance 00:01:53.319 LINK hello_world 00:01:53.577 LINK cmb_copy 00:01:53.577 LINK pmr_persistence 00:01:53.577 LINK hotplug 00:01:53.577 LINK arbitration 00:01:53.577 LINK hello_blob 00:01:53.577 LINK reconnect 00:01:53.577 LINK abort 00:01:53.856 LINK nvme_manage 00:01:53.856 CC test/bdev/bdevio/bdevio.o 00:01:53.856 LINK accel_perf 00:01:53.856 LINK blobcli 00:01:54.157 CC examples/bdev/hello_world/hello_bdev.o 00:01:54.157 CC examples/bdev/bdevperf/bdevperf.o 00:01:54.157 LINK iscsi_fuzz 00:01:54.157 LINK bdevio 00:01:54.415 LINK cuse 00:01:54.415 LINK hello_bdev 00:01:54.981 LINK bdevperf 00:01:55.239 CC examples/nvmf/nvmf/nvmf.o 00:01:55.497 LINK nvmf 00:01:58.028 LINK esnap 00:01:58.028 00:01:58.028 real 0m48.845s 00:01:58.028 user 10m5.863s 00:01:58.028 sys 2m27.707s 00:01:58.028 16:18:37 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:58.028 16:18:37 make -- 
common/autotest_common.sh@10 -- $ set +x 00:01:58.028 ************************************ 00:01:58.028 END TEST make 00:01:58.028 ************************************ 00:01:58.028 16:18:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:58.028 16:18:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:58.028 16:18:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:58.028 16:18:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:58.028 16:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.028 16:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:58.028 16:18:37 -- pm/common@44 -- $ pid=1303514 00:01:58.028 16:18:37 -- pm/common@50 -- $ kill -TERM 1303514 00:01:58.028 16:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.028 16:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:58.028 16:18:37 -- pm/common@44 -- $ pid=1303516 00:01:58.028 16:18:37 -- pm/common@50 -- $ kill -TERM 1303516 00:01:58.028 16:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.028 16:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:58.028 16:18:37 -- pm/common@44 -- $ pid=1303518 00:01:58.028 16:18:37 -- pm/common@50 -- $ kill -TERM 1303518 00:01:58.028 16:18:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.028 16:18:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:58.028 16:18:37 -- pm/common@44 -- $ pid=1303546 00:01:58.028 16:18:37 -- pm/common@50 -- $ sudo -E kill -TERM 1303546 00:01:58.028 16:18:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:58.028 16:18:37 -- nvmf/common.sh@7 -- # 
uname -s 00:01:58.028 16:18:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:58.028 16:18:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:58.028 16:18:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:58.028 16:18:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:58.028 16:18:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:58.028 16:18:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:58.028 16:18:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:58.028 16:18:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:58.028 16:18:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:58.028 16:18:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:58.028 16:18:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:58.028 16:18:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:58.028 16:18:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:58.028 16:18:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:58.028 16:18:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:58.028 16:18:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:58.028 16:18:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:58.286 16:18:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:58.287 16:18:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.287 16:18:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.287 16:18:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.287 16:18:37 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.287 16:18:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.287 16:18:37 -- paths/export.sh@5 -- # export PATH 00:01:58.287 16:18:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.287 16:18:37 -- nvmf/common.sh@47 -- # : 0 00:01:58.287 16:18:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:58.287 16:18:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:58.287 16:18:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:58.287 16:18:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:58.287 16:18:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:58.287 16:18:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:58.287 16:18:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:58.287 16:18:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:58.287 16:18:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:58.287 16:18:37 -- spdk/autotest.sh@32 -- # uname -s 00:01:58.287 16:18:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:58.287 16:18:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:58.287 16:18:37 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:58.287 16:18:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:58.287 16:18:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:58.287 16:18:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:58.287 16:18:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:58.287 16:18:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:58.287 16:18:37 -- spdk/autotest.sh@48 -- # udevadm_pid=1359477 00:01:58.287 16:18:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:58.287 16:18:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:58.287 16:18:37 -- pm/common@17 -- # local monitor 00:01:58.287 16:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.287 16:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.287 16:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.287 16:18:37 -- pm/common@21 -- # date +%s 00:01:58.287 16:18:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.287 16:18:37 -- pm/common@21 -- # date +%s 00:01:58.287 16:18:37 -- pm/common@25 -- # sleep 1 00:01:58.287 16:18:37 -- pm/common@21 -- # date +%s 00:01:58.287 16:18:37 -- pm/common@21 -- # date +%s 00:01:58.287 16:18:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053117 00:01:58.287 16:18:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053117 00:01:58.287 16:18:37 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053117 00:01:58.287 16:18:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721053117 00:01:58.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053117_collect-vmstat.pm.log 00:01:58.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053117_collect-cpu-load.pm.log 00:01:58.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053117_collect-cpu-temp.pm.log 00:01:58.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721053117_collect-bmc-pm.bmc.pm.log 00:01:59.220 16:18:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:59.220 16:18:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:59.220 16:18:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:59.220 16:18:38 -- common/autotest_common.sh@10 -- # set +x 00:01:59.220 16:18:38 -- spdk/autotest.sh@59 -- # create_test_list 00:01:59.220 16:18:38 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:59.220 16:18:38 -- common/autotest_common.sh@10 -- # set +x 00:01:59.220 16:18:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:59.220 16:18:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.220 16:18:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.220 16:18:38 -- spdk/autotest.sh@62 -- # 
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:59.220 16:18:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.220 16:18:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:59.220 16:18:38 -- common/autotest_common.sh@1455 -- # uname 00:01:59.220 16:18:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:59.220 16:18:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:59.220 16:18:38 -- common/autotest_common.sh@1475 -- # uname 00:01:59.220 16:18:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:59.220 16:18:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:59.220 16:18:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:59.220 16:18:38 -- spdk/autotest.sh@72 -- # hash lcov 00:01:59.220 16:18:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:59.220 16:18:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:59.220 --rc lcov_branch_coverage=1 00:01:59.220 --rc lcov_function_coverage=1 00:01:59.220 --rc genhtml_branch_coverage=1 00:01:59.220 --rc genhtml_function_coverage=1 00:01:59.220 --rc genhtml_legend=1 00:01:59.220 --rc geninfo_all_blocks=1 00:01:59.220 ' 00:01:59.220 16:18:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:59.220 --rc lcov_branch_coverage=1 00:01:59.220 --rc lcov_function_coverage=1 00:01:59.220 --rc genhtml_branch_coverage=1 00:01:59.220 --rc genhtml_function_coverage=1 00:01:59.220 --rc genhtml_legend=1 00:01:59.220 --rc geninfo_all_blocks=1 00:01:59.220 ' 00:01:59.220 16:18:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:59.220 --rc lcov_branch_coverage=1 00:01:59.220 --rc lcov_function_coverage=1 00:01:59.220 --rc genhtml_branch_coverage=1 00:01:59.220 --rc genhtml_function_coverage=1 00:01:59.220 --rc genhtml_legend=1 00:01:59.220 --rc geninfo_all_blocks=1 00:01:59.220 --no-external' 00:01:59.220 16:18:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:59.220 --rc 
lcov_branch_coverage=1 00:01:59.220 --rc lcov_function_coverage=1 00:01:59.220 --rc genhtml_branch_coverage=1 00:01:59.220 --rc genhtml_function_coverage=1 00:01:59.220 --rc genhtml_legend=1 00:01:59.220 --rc geninfo_all_blocks=1 00:01:59.220 --no-external' 00:01:59.220 16:18:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:59.220 lcov: LCOV version 1.14 00:01:59.220 16:18:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:01.122 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:01.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:01.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:01.123 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no 
functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:01.123 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:01.123 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:01.123 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:01.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:01.124 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no 
functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:01.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:01.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:01.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:01.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:16.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:16.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:34.393 16:19:11 -- spdk/autotest.sh@89 -- # 
timing_enter pre_cleanup 00:02:34.393 16:19:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:34.393 16:19:11 -- common/autotest_common.sh@10 -- # set +x 00:02:34.393 16:19:11 -- spdk/autotest.sh@91 -- # rm -f 00:02:34.393 16:19:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:34.393 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:34.393 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:34.393 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:34.393 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:34.393 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:34.393 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:34.393 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:34.393 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:34.393 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:34.393 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:34.393 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:34.393 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:34.393 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:34.393 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:34.393 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:34.393 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:34.393 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:34.393 16:19:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:34.393 16:19:13 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:34.393 16:19:13 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:34.393 16:19:13 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:34.393 16:19:13 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:34.393 16:19:13 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:34.393 16:19:13 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:34.393 16:19:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:34.393 16:19:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:34.393 16:19:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:34.393 16:19:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:34.393 16:19:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:34.393 16:19:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:34.393 16:19:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:34.393 16:19:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:34.393 No valid GPT data, bailing 00:02:34.393 16:19:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:34.393 16:19:13 -- scripts/common.sh@391 -- # pt= 00:02:34.393 16:19:13 -- scripts/common.sh@392 -- # return 1 00:02:34.393 16:19:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:34.393 1+0 records in 00:02:34.393 1+0 records out 00:02:34.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00232247 s, 451 MB/s 00:02:34.393 16:19:13 -- spdk/autotest.sh@118 -- # sync 00:02:34.393 16:19:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:34.393 16:19:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:34.393 16:19:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:36.293 16:19:15 -- spdk/autotest.sh@124 -- # uname -s 00:02:36.293 16:19:15 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:36.293 16:19:15 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:36.293 16:19:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:36.293 
16:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:36.293 16:19:15 -- common/autotest_common.sh@10 -- # set +x 00:02:36.293 ************************************ 00:02:36.293 START TEST setup.sh 00:02:36.293 ************************************ 00:02:36.293 16:19:15 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:36.293 * Looking for test storage... 00:02:36.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:36.293 16:19:15 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:36.293 16:19:15 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:36.293 16:19:15 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:36.293 16:19:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:36.293 16:19:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:36.293 16:19:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:36.293 ************************************ 00:02:36.293 START TEST acl 00:02:36.293 ************************************ 00:02:36.293 16:19:15 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:36.293 * Looking for test storage... 
00:02:36.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:36.293 16:19:15 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:36.293 16:19:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:36.294 16:19:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:36.294 16:19:15 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:36.294 16:19:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:36.294 16:19:15 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:36.294 16:19:15 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:36.294 16:19:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:36.294 16:19:15 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:36.294 16:19:15 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:36.294 16:19:15 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:36.294 16:19:15 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:36.294 16:19:15 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:36.294 16:19:15 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:36.294 16:19:15 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.294 16:19:15 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.664 16:19:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:37.664 16:19:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:37.664 16:19:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.664 16:19:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:37.664 16:19:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.664 16:19:16 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:38.599 Hugepages 00:02:38.599 node hugesize free / total 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 00:02:38.599 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 
16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:38.599 16:19:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:38.599 16:19:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.599 16:19:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.599 16:19:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:38.858 ************************************ 00:02:38.858 START TEST denied 00:02:38.858 ************************************ 00:02:38.858 16:19:18 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:38.858 16:19:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:38.858 16:19:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:38.858 16:19:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:38.858 16:19:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.858 16:19:18 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:40.268 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:40.268 16:19:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.801 00:02:42.801 real 0m3.726s 00:02:42.801 user 0m1.065s 00:02:42.801 sys 0m1.754s 00:02:42.801 16:19:21 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:42.801 16:19:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:42.801 ************************************ 00:02:42.801 END TEST denied 00:02:42.801 ************************************ 00:02:42.801 16:19:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:42.801 16:19:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:42.801 16:19:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:42.801 16:19:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.801 16:19:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:42.801 
************************************ 00:02:42.801 START TEST allowed 00:02:42.801 ************************************ 00:02:42.801 16:19:21 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:42.801 16:19:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:42.801 16:19:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:42.801 16:19:21 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:42.801 16:19:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.801 16:19:21 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:44.706 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.706 16:19:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:44.706 16:19:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:44.706 16:19:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:44.706 16:19:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.706 16:19:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:46.607 00:02:46.607 real 0m3.828s 00:02:46.607 user 0m1.021s 00:02:46.607 sys 0m1.637s 00:02:46.607 16:19:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:46.607 16:19:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:46.607 ************************************ 00:02:46.607 END TEST allowed 00:02:46.607 ************************************ 00:02:46.607 16:19:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:46.607 00:02:46.607 real 0m10.332s 00:02:46.607 user 0m3.214s 00:02:46.607 sys 0m5.108s 00:02:46.607 16:19:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:46.607 16:19:25 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:02:46.607 ************************************ 00:02:46.607 END TEST acl 00:02:46.607 ************************************ 00:02:46.607 16:19:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:46.607 16:19:25 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:46.607 16:19:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:46.607 16:19:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:46.607 16:19:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:46.607 ************************************ 00:02:46.607 START TEST hugepages 00:02:46.607 ************************************ 00:02:46.607 16:19:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:46.607 * Looking for test storage... 00:02:46.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:46.607 16:19:25 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43685948 kB' 'MemAvailable: 47188424 kB' 'Buffers: 2704 kB' 'Cached: 10277748 kB' 'SwapCached: 0 kB' 'Active: 7279444 kB' 'Inactive: 3506596 kB' 'Active(anon): 6884852 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508952 kB' 'Mapped: 172400 kB' 'Shmem: 6379264 kB' 'KReclaimable: 190256 kB' 'Slab: 557964 kB' 'SReclaimable: 190256 kB' 'SUnreclaim: 367708 kB' 'KernelStack: 12992 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 7998604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:46.607 16:19:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue [xtrace condensed: get_meminfo repeats the same IFS=': ' / read -r var val _ / [[ key == Hugepagesize ]] / continue cycle for every remaining /proc/meminfo field, MemFree through HugePages_Rsvd, until the Hugepagesize line is reached] 00:02:46.609 16:19:25
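The long run of `continue` lines above is the xtrace of a plain shell loop over `/proc/meminfo`. A minimal sketch of what that `get_meminfo` helper appears to do (function and variable names taken from the log; the exact SPDK implementation in setup/common.sh is not reproduced here, so treat this as an approximation):

```shell
# Approximation of setup/common.sh's get_meminfo, per the xtrace above:
# split each /proc/meminfo line on ':' and spaces, skip keys that do not
# match, and print the value of the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # source of the repeated "continue" lines
        echo "$val"                        # value in kB (the unit field is discarded)
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo MemTotal   # prints total RAM in kB on a Linux box
```

In the log, the loop stops once it reaches `Hugepagesize`, echoes `2048`, and `default_hugepages` is set from that value.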
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:46.609 16:19:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:46.609 16:19:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:46.609 16:19:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:46.610 16:19:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:46.610 ************************************ 00:02:46.610 START TEST default_setup 
00:02:46.610 ************************************ 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup 
-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.610 16:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.983 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:47.984 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:47.984 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:47.984 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:47.984 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:47.984 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:47.984 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:47.984 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:47.984 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:48.924 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local 
surp 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.924 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45786900 kB' 'MemAvailable: 49289360 kB' 'Buffers: 2704 kB' 'Cached: 10277840 kB' 'SwapCached: 0 kB' 'Active: 7296856 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902264 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526132 kB' 'Mapped: 172592 kB' 'Shmem: 6379356 kB' 'KReclaimable: 190224 kB' 'Slab: 557148 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366924 kB' 'KernelStack: 12800 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 
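Earlier in this run, setup.sh reported rebinds such as `0000:88:00.0 (8086 0a54): nvme -> vfio-pci`. A hedged way to double-check such a rebind from a shell is to read the device's `driver` symlink (standard Linux sysfs layout assumed; this helper is illustrative and not part of the SPDK scripts):

```shell
# Print the kernel driver currently bound to a PCI function (BDF form),
# or "none" when the device is absent or unbound.
pci_driver() {
    local bdf=$1 link=/sys/bus/pci/devices/$bdf/driver
    if [ -e "$link" ]; then
        basename "$(readlink -f "$link")"
    else
        echo none
    fi
}

pci_driver 0000:zz:00.0   # bogus address -> prints "none"
```

After `setup.sh config`, the same query on a claimed NVMe device would be expected to report `vfio-pci`; after `setup.sh reset`, `nvme` again.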
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the same get_meminfo field-matching cycle repeats, this time comparing every /proc/meminfo key against AnonHugePages and emitting "continue" for each non-matching field]
-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.925 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45787684 kB' 'MemAvailable: 49290144 kB' 'Buffers: 2704 kB' 'Cached: 10277844 kB' 'SwapCached: 0 kB' 'Active: 7296748 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902156 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526024 kB' 'Mapped: 172516 kB' 'Shmem: 6379360 kB' 'KReclaimable: 190224 kB' 'Slab: 557100 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366876 kB' 'KernelStack: 12816 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 
36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.926 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.927 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:48.928 
16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45788264 kB' 'MemAvailable: 49290724 kB' 'Buffers: 2704 kB' 'Cached: 10277844 kB' 'SwapCached: 0 kB' 'Active: 7296364 kB' 'Inactive: 3506596 kB' 'Active(anon): 6901772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525652 kB' 'Mapped: 172456 kB' 'Shmem: 6379360 kB' 'KReclaimable: 190224 kB' 'Slab: 557300 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 367076 kB' 'KernelStack: 12832 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.928 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.929 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.930 nr_hugepages=1024 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.930 
resv_hugepages=0 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.930 surplus_hugepages=0 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.930 anon_hugepages=0 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45790868 kB' 'MemAvailable: 49293328 kB' 'Buffers: 2704 kB' 'Cached: 10277884 kB' 
'SwapCached: 0 kB' 'Active: 7296700 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902108 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525944 kB' 'Mapped: 172456 kB' 'Shmem: 6379400 kB' 'KReclaimable: 190224 kB' 'Slab: 557284 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 367060 kB' 'KernelStack: 12832 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.930 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:48.931 
16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.931 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21575244 kB' 'MemUsed: 11301696 kB' 'SwapCached: 0 kB' 'Active: 4986004 kB' 'Inactive: 3264144 kB' 'Active(anon): 4797432 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7949924 kB' 'Mapped: 62112 kB' 'AnonPages: 303360 kB' 'Shmem: 4497208 kB' 'KernelStack: 6856 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114376 kB' 'Slab: 301424 kB' 'SReclaimable: 114376 kB' 'SUnreclaim: 187048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 
16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.932 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.191 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.191 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.191 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.191 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.191 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.191 16:19:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace elided: IFS=': ' / read -r var val _ loop over remaining /proc/meminfo fields (Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), each non-match hitting continue, until HugePages_Surp matches]
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup --
setup/common.sh@33 -- # echo 0
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:49.192 node0=1024 expecting 1024
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:49.192
00:02:49.192 real	0m2.512s
00:02:49.192 user	0m0.687s
00:02:49.192 sys	0m0.896s
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:49.192 16:19:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:49.192 ************************************
00:02:49.192 END TEST default_setup
00:02:49.192 ************************************
00:02:49.192 16:19:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:49.192 16:19:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:49.192 16:19:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:49.192 16:19:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:49.192 16:19:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:49.192 ************************************
00:02:49.192 START TEST per_node_1G_alloc
00:02:49.192 ************************************
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc --
common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:49.192 16:19:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:50.126 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:50.126 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:50.126 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:50.126 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:50.126 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:50.126 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:50.126 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:50.126 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:50.126 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:50.126 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:50.126 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:50.126 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:50.126 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
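The bulk of the xtrace above and below comes from one pattern: a loop that reads /proc/meminfo line by line with `IFS=': '` and `read -r var val _`, hitting `continue` for every field that is not the one requested (which is why each lookup emits dozens of `[[ Field == \H\u\g\e... ]]` lines under `set -x`). A minimal, self-contained sketch of that pattern follows; `get_field` and the `sample` data are illustrative stand-ins, not SPDK's actual `setup/common.sh` `get_meminfo`, which additionally handles per-NUMA-node meminfo files:

```shell
#!/usr/bin/env bash
# Sketch of the field-matching loop visible in the xtrace: split each
# "Key: value [unit]" line on ':' and ' ', skip non-matching keys, and
# print the value of the requested key.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # every field that is not the requested one falls through here,
        # which is what produces the long runs of xtrace output above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Hypothetical sample in /proc/meminfo format (values taken from the log)
sample='MemTotal: 60541692 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_field HugePages_Total <<<"$sample"   # prints 1024
```

On a real system the same function could read `/proc/meminfo` directly (`get_field HugePages_Surp </proc/meminfo`); the test scripts use the per-field value (e.g. `HugePages_Surp: 0`) to verify that exactly the requested number of hugepages was allocated.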
00:02:50.126 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:50.126 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:50.126 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:50.126 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:50.391 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:50.391 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:50.391 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:50.391 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:50.391 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:50.391 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:50.391 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45780144 kB' 'MemAvailable: 49282604 kB' 'Buffers: 2704 kB' 'Cached: 10277952 kB' 'SwapCached: 0 kB' 'Active: 7297408 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902816 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526052 kB' 'Mapped: 172564 kB' 'Shmem: 6379468 kB' 'KReclaimable: 190224 kB' 'Slab: 557080 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366856 kB' 'KernelStack: 12800 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB'
00:02:50.392 16:19:29 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31-32 -- # [xtrace elided: IFS=': ' / read -r var val _ loop matching each /proc/meminfo field (MemTotal … HardwareCorrupted) against AnonHugePages, each non-match hitting continue]
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@20 -- # local mem_f mem
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45783232 kB' 'MemAvailable: 49285692 kB' 'Buffers: 2704 kB' 'Cached: 10277956 kB' 'SwapCached: 0 kB' 'Active: 7297220 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902628 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526316 kB' 'Mapped: 172488 kB' 'Shmem: 6379472 kB' 'KReclaimable: 190224 kB' 'Slab: 557072 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366848 kB' 'KernelStack: 12832 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB'
00:02:50.393 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: field-by-field match against HugePages_Surp (MemTotal … Mlocked), each non-match hitting continue]
00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 
16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.394 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541692 kB' 'MemFree: 45782592 kB' 'MemAvailable: 49285052 kB' 'Buffers: 2704 kB' 'Cached: 10277972 kB' 'SwapCached: 0 kB' 'Active: 7297220 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902628 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526292 kB' 'Mapped: 172488 kB' 'Shmem: 6379488 kB' 'KReclaimable: 190224 kB' 'Slab: 557108 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366884 kB' 'KernelStack: 12832 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.395 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.396 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:50.396 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeated for each remaining /proc/meminfo field (Zswapped through HugePages_Free) ...]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:50.397 nr_hugepages=1024
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:50.397 resv_hugepages=0
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:50.397 surplus_hugepages=0
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:50.397 anon_hugepages=0
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:50.397
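The trace above shows `setup/common.sh`'s `get_meminfo` walking /proc/meminfo field by field (`IFS=': '` plus `read -r var val _`) until the requested key, here `HugePages_Rsvd`, matches, then echoing its value and returning. A minimal standalone sketch of that parsing pattern, assuming a hypothetical helper name (`get_meminfo_value` is not the SPDK function itself):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: scan /proc/meminfo
# line by line with IFS=': ' until the requested field matches, then
# print its numeric value. Hypothetical helper, not the SPDK source.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"     # the "kB" unit, if any, lands in the discarded _
        return 0
    done < /proc/meminfo
    return 1            # field not present
}

get_meminfo_value HugePages_Total
```

With hugepages configured as in this run, the call would print the pool size (1024 here); on a box with no hugepage pool it prints 0.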
16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45781584 kB' 'MemAvailable: 49284044 kB' 'Buffers: 2704 kB' 'Cached: 10278012 kB' 'SwapCached: 0 kB' 'Active: 7297292 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902700 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526312 kB' 'Mapped: 172488 kB' 'Shmem: 6379528 kB' 'KReclaimable: 190224 kB' 'Slab: 557108 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366884 kB' 'KernelStack: 12832 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8015908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB'
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.397 16:19:29
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:50.397 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue trace repeated for each remaining /proc/meminfo field (Cached through Unaccepted) ...]
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:50.399 16:19:29
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22620080 kB' 'MemUsed: 10256860 kB' 'SwapCached: 0 kB' 'Active: 4986540 kB' 'Inactive: 3264144 kB' 'Active(anon): 4797968 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7949928 kB' 'Mapped: 62132 kB' 'AnonPages: 303844 kB' 'Shmem: 4497212 kB' 'KernelStack: 6872 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114376 kB' 'Slab: 301276 kB' 'SReclaimable: 114376 kB' 'SUnreclaim: 186900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.399 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.400 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:50.681 16:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23161448 kB' 'MemUsed: 4503304 kB' 'SwapCached: 0 kB' 'Active: 2310384 kB' 'Inactive: 242452 kB' 'Active(anon): 2104364 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2330796 kB' 'Mapped: 110356 kB' 'AnonPages: 222100 kB' 'Shmem: 1882324 kB' 'KernelStack: 5928 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75848 kB' 'Slab: 255832 kB' 'SReclaimable: 75848 kB' 'SUnreclaim: 179984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.681 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.682 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:50.683 node0=512 expecting 512 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:50.683 node1=512 expecting 512 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:50.683 00:02:50.683 real 0m1.441s 00:02:50.683 user 0m0.606s 00:02:50.683 sys 0m0.797s 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:50.683 16:19:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:50.683 ************************************ 00:02:50.683 END TEST per_node_1G_alloc 00:02:50.683 ************************************ 00:02:50.683 16:19:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:50.683 16:19:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:50.683 16:19:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.683 16:19:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.683 16:19:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:50.683 
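The long runs of `[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue` records above are the xtrace of the `get_meminfo` helper in `setup/common.sh`: it walks `/proc/meminfo` with `IFS=': '` and `read -r var val _`, skipping every field until the requested key matches, then echoes that field's numeric value. A minimal standalone sketch of the same parsing pattern (the function name here is illustrative, not SPDK's exact helper; assumes a Linux `/proc/meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of the /proc/meminfo scan traced above: split each line on
# ': ' into key and value, skip non-matching keys (the "continue"
# records in the xtrace), and print the value of the requested field.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0   # field absent on this kernel
}

get_meminfo_field HugePages_Surp
```

With `IFS=': '` a line such as `MemTotal: 60541692 kB` splits into `var=MemTotal`, `val=60541692`, with the `kB` unit discarded into `_`, which is why the test arithmetic above (`nodes_test[node] += 0`) can use the value directly.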
************************************ 00:02:50.683 START TEST even_2G_alloc 00:02:50.683 ************************************ 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 
-- # : 512 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.683 16:19:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:51.633 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:51.633 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:51.633 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:51.633 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:51.633 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:51.633 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:51.633 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:51.633 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:51.633 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:51.633 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:51.893 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:51.893 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:02:51.893 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:51.893 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:51.893 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:51.894 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:51.894 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.894 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45799884 kB' 'MemAvailable: 49302344 kB' 'Buffers: 2704 kB' 'Cached: 10278092 kB' 'SwapCached: 0 kB' 'Active: 7296752 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902160 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525744 kB' 'Mapped: 172500 kB' 'Shmem: 6379608 kB' 'KReclaimable: 190224 kB' 'Slab: 556888 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366664 kB' 'KernelStack: 12896 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8016272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 
16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.894 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45800116 kB' 'MemAvailable: 49302576 kB' 'Buffers: 2704 kB' 'Cached: 10278096 kB' 'SwapCached: 0 kB' 'Active: 7296648 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902056 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525644 kB' 'Mapped: 172436 kB' 'Shmem: 6379612 kB' 'KReclaimable: 190224 kB' 'Slab: 556864 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366640 kB' 'KernelStack: 12944 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8016292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:51.895 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 
16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.896 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.897 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45800052 kB' 'MemAvailable: 49302512 kB' 'Buffers: 2704 kB' 'Cached: 10278112 kB' 'SwapCached: 0 kB' 'Active: 7297480 kB' 'Inactive: 3506596 kB' 'Active(anon): 6902888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526460 kB' 'Mapped: 172872 kB' 'Shmem: 6379628 kB' 'KReclaimable: 190224 kB' 'Slab: 556856 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366632 kB' 'KernelStack: 12928 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8017800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.897 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:51.898 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 
16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.899 nr_hugepages=1024 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.899 resv_hugepages=0 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.899 surplus_hugepages=0 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.899 anon_hugepages=0 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.899 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45797028 kB' 'MemAvailable: 49299488 kB' 'Buffers: 2704 kB' 'Cached: 10278136 kB' 'SwapCached: 0 kB' 'Active: 7299548 kB' 'Inactive: 3506596 kB' 'Active(anon): 6904956 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528492 kB' 'Mapped: 172872 kB' 'Shmem: 6379652 kB' 'KReclaimable: 190224 kB' 'Slab: 556856 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366632 kB' 'KernelStack: 12928 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8020728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.899 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.161 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.161 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.161 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.161 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.162 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@18 -- # local node=0 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22619140 kB' 'MemUsed: 10257800 kB' 'SwapCached: 0 kB' 'Active: 4992232 kB' 'Inactive: 3264144 kB' 'Active(anon): 4803660 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950008 kB' 'Mapped: 62996 kB' 'AnonPages: 309580 kB' 'Shmem: 4497292 kB' 'KernelStack: 6968 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114376 kB' 'Slab: 301192 kB' 'SReclaimable: 114376 kB' 'SUnreclaim: 186816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.163 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.164 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23174980 kB' 'MemUsed: 4489772 kB' 'SwapCached: 0 kB' 'Active: 2310316 kB' 'Inactive: 242452 kB' 'Active(anon): 2104296 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2330856 kB' 'Mapped: 110360 kB' 'AnonPages: 221744 kB' 'Shmem: 1882384 kB' 'KernelStack: 5976 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75848 kB' 'Slab: 255664 kB' 'SReclaimable: 75848 kB' 'SUnreclaim: 179816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.165 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.166 16:19:31
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:52.166 node0=512 expecting 512 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.166 16:19:31 
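The trace above is the `get_meminfo HugePages_Surp` field scan from setup/common.sh@31-33: read `/proc/meminfo` record by record, skip non-matching keys, and echo the value of the requested field. A minimal sketch of that loop is below; the helper name and the optional file argument are illustrative simplifications, not the exact SPDK implementation.

```shell
# Hedged sketch of the get_meminfo loop traced above (setup/common.sh@31-33).
# Scans a meminfo-style file line by line and prints the value of one field.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Mirror the "[[ $var == $get ]] || continue" pattern in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}
```

In the trace, the scan falls through every field until `HugePages_Surp`, then echoes 0: no surplus hugepages are counted on either node.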
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:52.166 node1=512 expecting 512 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:52.166 00:02:52.166 real 0m1.498s 00:02:52.166 user 0m0.609s 00:02:52.166 sys 0m0.850s 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.166 16:19:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:52.166 ************************************ 00:02:52.166 END TEST even_2G_alloc 00:02:52.166 ************************************ 00:02:52.166 16:19:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:52.166 16:19:31 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:52.166 16:19:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.166 16:19:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.166 16:19:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:52.166 ************************************ 00:02:52.166 START TEST odd_alloc 00:02:52.166 ************************************ 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.166 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:52.167 16:19:31 
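The `get_test_nr_hugepages_per_node` trace above distributes `_nr_hugepages=1025` across `_no_nodes=2`, ending with `nodes_test[1]=512` and `nodes_test[0]=513`. A hedged reconstruction of that arithmetic is sketched below; the function name is illustrative and the exact SPDK loop may differ in detail, but the splitting result matches the trace (remainder lands on node 0).

```shell
# Hedged reconstruction of the per-node split in setup/hugepages.sh@81-84:
# hand each node a floor-division share of the remaining pages, working from
# the highest-numbered node down, so any remainder accumulates on node 0.
split_hugepages() {
    local nr=$1 nodes=$2
    local -a per_node
    while (( nodes > 0 )); do
        per_node[nodes - 1]=$(( nr / nodes ))   # this node's share
        nr=$(( nr - per_node[nodes - 1] ))      # pages still to place
        nodes=$(( nodes - 1 ))
    done
    echo "${per_node[@]}"
}

split_hugepages 1025 2   # prints "513 512", as in the trace
```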
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.167 16:19:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.544 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.544 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.544 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.544 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.544 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.544 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.544 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.544 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.544 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.544 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.544 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.544 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.544 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.544 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.544 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.544 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.544 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- 
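The "Already using the vfio-pci driver" lines above come from `scripts/setup.sh` checking which kernel driver each PCI function is bound to. On Linux that binding is exposed as the `driver` symlink under `/sys/bus/pci/devices/<bdf>/`; the sketch below is an illustrative way to read it (the function name and the optional root argument are assumptions, not SPDK code).

```shell
# Hedged sketch of a PCI driver-binding check: the bound driver for a PCI
# function is the target of /sys/bus/pci/devices/<bdf>/driver.
pci_driver() {
    local bdf=$1 root=${2:-/sys/bus/pci/devices} link
    link="$root/$bdf/driver"
    [[ -e $link ]] || { echo "none"; return; }   # no driver bound
    basename "$(readlink -f "$link")"
}
```

On this runner, `pci_driver 0000:88:00.0` would report `vfio-pci`, matching the log.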
setup/hugepages.sh@93 -- # local resv 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45793688 kB' 'MemAvailable: 49296148 kB' 'Buffers: 2704 kB' 'Cached: 10278224 kB' 'SwapCached: 0 kB' 'Active: 7294392 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899800 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523188 kB' 
'Mapped: 171624 kB' 'Shmem: 6379740 kB' 'KReclaimable: 190224 kB' 'Slab: 556856 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366632 kB' 'KernelStack: 13232 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8001220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196432 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
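The snapshot above reports `HugePages_Total: 1025`, `Hugepagesize: 2048 kB`, and `Hugetlb: 2099200 kB`. These are internally consistent: the total hugetlb reservation is the page count times the page size. The check below is illustrative arithmetic, not part of the test itself.

```shell
# Consistency check (illustrative): Hugetlb = HugePages_Total * Hugepagesize.
hugepages_total=1025
hugepagesize_kb=2048
hugetlb_kb=$(( hugepages_total * hugepagesize_kb ))
echo "$hugetlb_kb kB"   # 2099200 kB, matching the 'Hugetlb: 2099200 kB' line
```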
00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.544 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.545 
16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45800064 kB' 'MemAvailable: 49302524 kB' 'Buffers: 2704 kB' 'Cached: 10278228 kB' 'SwapCached: 0 kB' 'Active: 7292772 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521596 kB' 'Mapped: 171660 kB' 'Shmem: 6379744 kB' 'KReclaimable: 190224 kB' 'Slab: 556868 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366644 kB' 'KernelStack: 12752 kB' 'PageTables: 7376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8001240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:53.545 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.546 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45800592 kB' 'MemAvailable: 49303052 kB' 'Buffers: 2704 kB' 'Cached: 10278228 kB' 'SwapCached: 0 kB' 'Active: 7292664 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898072 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521540 kB' 'Mapped: 171548 kB' 'Shmem: 6379744 kB' 'KReclaimable: 190224 kB' 'Slab: 556908 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366684 kB' 'KernelStack: 12864 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8001260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 
16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 
16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.547 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.547 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.547 [… 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32: fields Dirty through HugePages_Free compared against HugePages_Rsvd; none match, each skipped with continue …] 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:53.548 nr_hugepages=1025 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.548 resv_hugepages=0 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.548 surplus_hugepages=0 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.548 anon_hugepages=0 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 ==
nr_hugepages )) 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.548 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45800592 kB' 'MemAvailable: 49303052 kB' 'Buffers: 2704 kB' 'Cached: 10278272 kB' 'SwapCached: 0 kB' 'Active: 7292676 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898084 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521520 kB' 'Mapped: 171548 kB' 'Shmem: 6379788 kB' 'KReclaimable: 190224 kB' 'Slab: 556900 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366676 kB' 'KernelStack: 12816 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8001280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:53.548 [… 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32: fields MemTotal through Unaccepted compared against HugePages_Total; none match, each skipped with continue …] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22619720 kB' 'MemUsed: 10257220 kB' 'SwapCached: 0 kB' 'Active: 4983892 kB' 'Inactive: 3264144 kB' 'Active(anon): 4795320 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950024 kB' 'Mapped: 61416 kB' 'AnonPages: 301148 kB' 'Shmem: 4497308 kB' 'KernelStack: 6888 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114376 kB' 'Slab: 301244 kB' 'SReclaimable: 114376 kB' 'SUnreclaim: 186868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.550 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.550 16:19:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 
16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23181500 kB' 'MemUsed: 4483252 kB' 'SwapCached: 0 kB' 'Active: 2308820 kB' 'Inactive: 242452 kB' 'Active(anon): 2102800 kB' 'Inactive(anon): 0 kB' 
'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2330984 kB' 'Mapped: 110132 kB' 'AnonPages: 220400 kB' 'Shmem: 1882512 kB' 'KernelStack: 5944 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75848 kB' 'Slab: 255656 kB' 'SReclaimable: 75848 kB' 'SUnreclaim: 179808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.551 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 
16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 
16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:53.552 node0=512 expecting 513 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.552 
16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:53.552 node1=513 expecting 512 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:53.552 00:02:53.552 real 0m1.428s 00:02:53.552 user 0m0.617s 00:02:53.552 sys 0m0.772s 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:53.552 16:19:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:53.552 ************************************ 00:02:53.552 END TEST odd_alloc 00:02:53.552 ************************************ 00:02:53.552 16:19:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:53.552 16:19:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:53.552 16:19:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.552 16:19:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.552 16:19:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.552 ************************************ 00:02:53.552 START TEST custom_alloc 00:02:53.552 ************************************ 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local 
nodes_hp 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:53.552 16:19:33 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.552 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.553 16:19:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.927 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:54.927 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.927 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:54.927 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:54.927 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:54.927 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:54.927 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:54.927 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:54.927 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:54.927 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 
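The trace above assembles `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` by appending one `nodes_hp[$node]=...` entry per node and summing `_nr_hugepages` as it goes. A standalone sketch of that assembly (values taken from the trace):

```shell
# Per-node hugepage targets from the trace: 512 on node 0, 1024 on node 1.
nodes_hp=(512 1024)

HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    # One entry per node, exactly as hugepages.sh@182/@183 show.
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done

# Join the array with commas to form the final HUGENODE string.
hugenode_str=$(IFS=,; echo "${HUGENODE[*]}")
echo "$hugenode_str"              # nodes_hp[0]=512,nodes_hp[1]=1024
echo "total: $_nr_hugepages"      # total: 1536
```

The subshell around `IFS=,; echo "${HUGENODE[*]}"` scopes the `IFS` change so the comma join does not leak into later word splitting; the 1536 total is what `nr_hugepages=1536` reflects further down in the log.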
00:02:54.927 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:54.927 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:54.927 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:54.927 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:54.927 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:54.927 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:54.927 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44751484 kB' 'MemAvailable: 48253944 kB' 'Buffers: 2704 kB' 'Cached: 10278356 kB' 'SwapCached: 0 kB' 'Active: 7292804 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898212 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521520 kB' 'Mapped: 171624 kB' 'Shmem: 6379872 kB' 'KReclaimable: 190224 kB' 'Slab: 557192 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366968 kB' 'KernelStack: 12848 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8001480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 
'DirectMap1G: 53477376 kB' 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.927 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.927 [... identical IFS/read/match/continue iterations repeat for every remaining /proc/meminfo field down to HardwareCorrupted ...] 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc --
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.929 
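The `get_meminfo HugePages_Surp` call above re-walks `/proc/meminfo` field by field in bash. The same lookup fits in one `awk` expression; a sketch run against an inline sample (so it is self-contained, rather than assuming a live `/proc/meminfo`):

```shell
# Sample mirroring the hugepage fields in the snapshot above.
sample='HugePages_Total: 1536
HugePages_Free: 1536
HugePages_Surp: 0'

# Split on ':' plus whitespace, match the key in $1, print the value and stop.
surp_awk=$(awk -F': +' '$1 == "HugePages_Surp" {print $2; exit}' <<< "$sample")
echo "HugePages_Surp -> $surp_awk"
```

The bash loop in the trace is deliberately dependency-free, which matters in a constrained test harness; `awk` is the compact alternative when it is available.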
16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44756328 kB' 'MemAvailable: 48258788 kB' 'Buffers: 2704 kB' 'Cached: 10278360 kB' 'SwapCached: 0 kB' 'Active: 7292720 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898128 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521468 kB' 'Mapped: 171560 kB' 'Shmem: 6379876 kB' 'KReclaimable: 190224 kB' 'Slab: 557168 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366944 kB' 'KernelStack: 12848 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8001132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.929 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.929 16:19:34 
[... identical per-field "[[ <field> == HugePages_Surp ]] / continue" iterations for MemAvailable through HugePages_Rsvd omitted; none match ...]
00:02:54.930 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.930 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.930 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:54.930 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:54.930 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:54.930 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:54.930 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44756088 kB' 'MemAvailable: 48258548 kB' 'Buffers: 2704 kB' 'Cached: 10278372 kB' 'SwapCached: 0 kB' 'Active: 7292380 kB' 'Inactive: 3506596 kB' 'Active(anon): 6897788 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521076 kB' 'Mapped: 171560 kB' 'Shmem: 6379888 kB' 'KReclaimable: 190224 kB' 'Slab: 557168 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366944 kB' 'KernelStack: 12784 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8001152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB'
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:54.931 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
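Both meminfo dumps report the same hugepage accounting: 1536 pages at a Hugepagesize of 2048 kB, matching the reported `Hugetlb: 3145728 kB` (3 GiB) exactly. A quick sanity check of that arithmetic:

```shell
# 1536 hugepages x 2048 kB/page should equal the Hugetlb line.
echo $((1536 * 2048))                 # 3145728 (kB)
echo $((1536 * 2048 / 1024 / 1024))   # 3 (GiB)
```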
[... identical per-field "[[ <field> == HugePages_Rsvd ]] / continue" iterations for the remaining meminfo fields omitted; none match ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.932 16:19:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:54.932 nr_hugepages=1536 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:54.932 resv_hugepages=0 00:02:54.932 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:54.932 surplus_hugepages=0 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:54.933 anon_hugepages=0 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.933 16:19:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44755484 kB' 'MemAvailable: 48257944 kB' 'Buffers: 2704 kB' 'Cached: 10278400 kB' 'SwapCached: 0 kB' 'Active: 7292616 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898024 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521292 kB' 'Mapped: 171560 kB' 'Shmem: 6379916 kB' 'KReclaimable: 190224 kB' 'Slab: 557168 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366944 kB' 'KernelStack: 12832 kB' 'PageTables: 7580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8001176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.933 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.934 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.935 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22617836 kB' 'MemUsed: 10259104 kB' 'SwapCached: 0 kB' 'Active: 4984424 kB' 'Inactive: 3264144 kB' 'Active(anon): 4795852 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950124 kB' 'Mapped: 61428 kB' 'AnonPages: 301560 kB' 'Shmem: 4497408 kB' 'KernelStack: 6856 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114376 kB' 'Slab: 301232 kB' 'SReclaimable: 114376 kB' 'SUnreclaim: 186856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:54.935 16:19:34 [... identical read/compare/continue trace for every field from MemTotal through HugePages_Free elided ...] 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.936 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.936 16:19:34
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22136920 kB' 'MemUsed: 5527832 kB' 'SwapCached: 0 kB' 'Active: 2308512 kB' 'Inactive: 242452 kB' 'Active(anon): 2102492 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2330988 kB' 'Mapped: 110132 kB' 'AnonPages: 220100 kB' 'Shmem: 1882516 kB' 'KernelStack: 5976 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75848 kB' 'Slab: 256008 kB' 'SReclaimable: 75848 kB' 'SUnreclaim: 180160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:54.936 16:19:34 [... identical read/compare/continue trace for every field from MemTotal through HugePages_Free elided ...] 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 
0 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:54.937 node0=512 expecting 512 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:54.937 node1=1024 expecting 1024 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:54.937 00:02:54.937 real 0m1.403s 00:02:54.937 user 0m0.595s 00:02:54.937 sys 0m0.764s 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:54.937 16:19:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:54.937 ************************************ 00:02:54.937 END TEST custom_alloc 00:02:54.937 ************************************ 00:02:54.937 16:19:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:54.937 16:19:34 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:54.937 16:19:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.937 16:19:34 
setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.937 16:19:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:55.195 ************************************ 00:02:55.195 START TEST no_shrink_alloc 00:02:55.195 ************************************ 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.195 16:19:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.125 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.125 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.125 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.125 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.125 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.125 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.125 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.125 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:56.125 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:56.125 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.125 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.125 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.125 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.125 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.125 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.125 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:56.125 0000:80:04.0 
(8086 0e20): Already using the vfio-pci driver 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.388 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812168 kB' 'MemAvailable: 49314628 kB' 'Buffers: 2704 kB' 'Cached: 10278488 kB' 'SwapCached: 0 kB' 'Active: 7292924 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898332 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521556 kB' 'Mapped: 171712 kB' 'Shmem: 6380004 kB' 'KReclaimable: 190224 kB' 'Slab: 557140 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366916 kB' 'KernelStack: 12848 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.388 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 
16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 
16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.389 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.390 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812024 kB' 'MemAvailable: 49314484 kB' 'Buffers: 2704 kB' 'Cached: 10278488 kB' 'SwapCached: 0 kB' 'Active: 7293260 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521892 kB' 'Mapped: 171680 kB' 'Shmem: 6380004 kB' 'KReclaimable: 190224 kB' 'Slab: 557116 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366892 kB' 'KernelStack: 12864 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB'
00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.390 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[identical "[[ <field> == HugePages_Surp ]] / continue / IFS / read" iterations for the remaining non-matching meminfo fields omitted]
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45811780 kB' 'MemAvailable: 49314240 kB' 'Buffers: 2704 kB' 'Cached: 10278508 kB' 'SwapCached: 0 kB' 'Active: 7293188 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898596 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521772 kB' 'Mapped: 171596 kB' 'Shmem: 6380024 kB' 'KReclaimable: 190224 kB' 'Slab: 557100 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366876 kB' 'KernelStack: 12896 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB'
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.392 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[identical "[[ <field> == HugePages_Rsvd ]] / continue / IFS / read" iterations for the remaining non-matching meminfo fields omitted; the trace continues in the next section]
00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.393 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.394 nr_hugepages=1024 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.394 resv_hugepages=0 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:02:56.394 surplus_hugepages=0 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.394 anon_hugepages=0 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45811528 kB' 'MemAvailable: 49313988 kB' 'Buffers: 2704 kB' 'Cached: 10278528 kB' 'SwapCached: 0 kB' 'Active: 7292988 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898396 kB' 'Inactive(anon): 
0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521544 kB' 'Mapped: 171596 kB' 'Shmem: 6380044 kB' 'KReclaimable: 190224 kB' 'Slab: 557100 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366876 kB' 'KernelStack: 12880 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.394 16:19:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.394 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 
16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 
16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
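The long trace above is `get_meminfo` from setup/common.sh walking `/proc/meminfo` one `Key: value` pair at a time with `IFS=': '` until the requested key (`HugePages_Rsvd`, then `HugePages_Total`) matches, echoing the value and returning 0. A minimal standalone sketch of that idiom (hedged; `get_meminfo_sketch` is a hypothetical name, not the SPDK source):

```shell
#!/usr/bin/env bash
# Hedged sketch of the key-scan idiom seen in the trace: split each
# /proc/meminfo line on ": " into key/value, skip non-matching keys,
# and print the value of the requested one.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # numeric value only; a trailing "kB" lands in $_
            return 0
        fi
    done < /proc/meminfo
    return 1               # key not present
}

total_kb=$(get_meminfo_sketch MemTotal)
echo "MemTotal: ${total_kb} kB"
```

The `_` catch-all in `read -r var val _` is what lets the same loop handle both `HugePages_Total: 1024` (no unit) and `MemTotal: 60541692 kB` (unit suffix) without special-casing.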
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:56.395 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21582248 kB' 'MemUsed: 11294692 kB' 'SwapCached: 0 kB' 'Active: 4984848 kB' 'Inactive: 3264144 kB' 'Active(anon): 4796276 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950260 kB' 'Mapped: 61456 kB' 'AnonPages: 301976 kB' 'Shmem: 4497544 kB' 'KernelStack: 6920 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114376 kB' 'Slab: 301220 kB' 'SReclaimable: 114376 kB' 'SUnreclaim: 186844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
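The `mapfile`/`mem=(…)` step at common.sh@28-29 in the trace handles the per-node file format: lines in `/sys/devices/system/node/node0/meminfo` are prefixed with `Node <n> `, so each array element is stripped with an extglob pattern before the key/value scan runs. A standalone reproduction of just that stripping step, with a two-line here-doc standing in for the sysfs file:

```shell
# Per-node meminfo lines carry a 'Node <n> ' prefix, unlike /proc/meminfo.
shopt -s extglob   # required for the +([0-9]) pattern below

mapfile -t mem <<'EOF'
Node 0 MemTotal: 32876940 kB
Node 0 HugePages_Total: 1024
EOF

# Strip the 'Node <n> ' prefix from every element, as common.sh@29 does.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# prints:
# MemTotal: 32876940 kB
# HugePages_Total: 1024
```

After the strip, the same `IFS=': ' read` loop works unchanged on both `/proc/meminfo` and the per-node files.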
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.396 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/skip iterations for the remaining node0 meminfo keys (SwapCached through HugePages_Free); none matched HugePages_Surp ...]
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:56.397 node0=1024 expecting 1024
16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:56.397 16:19:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:57.330 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:57.330 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:57.330 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:57.330 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:57.330 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:57.330 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:57.330 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:57.330 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:57.330 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:57.330 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:57.330 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:57.330 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:57.330 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:57.330 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:57.330 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:57.595 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:57.595 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:57.595 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45790312 kB' 'MemAvailable: 49292772 kB' 'Buffers: 2704 kB' 'Cached: 10278596 kB' 'SwapCached: 0 kB' 'Active: 7293496 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898904 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522032 kB' 'Mapped: 171836 kB' 'Shmem: 6380112 kB' 'KReclaimable: 190224 kB' 'Slab: 557128 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366904 kB' 'KernelStack: 12912 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB'
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:57.595 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/skip iterations for the subsequent meminfo keys (MemFree through NFS_Unstable appear before this log chunk ends) ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.596 
16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45790216 kB' 'MemAvailable: 49292676 kB' 'Buffers: 2704 kB' 'Cached: 10278600 kB' 'SwapCached: 0 kB' 'Active: 7293440 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898848 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521936 kB' 'Mapped: 
171676 kB' 'Shmem: 6380116 kB' 'KReclaimable: 190224 kB' 'Slab: 557144 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366920 kB' 'KernelStack: 12896 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.596 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 
16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.597 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 
16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45790512 kB' 'MemAvailable: 49292972 kB' 'Buffers: 2704 kB' 'Cached: 10278620 kB' 'SwapCached: 0 kB' 'Active: 7293448 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898856 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521864 kB' 'Mapped: 171600 kB' 'Shmem: 6380136 kB' 'KReclaimable: 190224 kB' 'Slab: 557104 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366880 kB' 'KernelStack: 12896 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.598 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.599 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 
16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:57.600 nr_hugepages=1024 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:57.600 resv_hugepages=0 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:57.600 surplus_hugepages=0 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:57.600 anon_hugepages=0 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:57.600 16:19:37 
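The trace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` key by key (splitting each line on `IFS=': '` and `continue`-ing until the requested key matches), which is how `surp=0` and `resv=0` are derived from `HugePages_Surp` and `HugePages_Rsvd`. A minimal, self-contained sketch of that parsing pattern follows; it mirrors the names visible in the trace (`get_meminfo`, `mem_f`, `var`, `val`), but the `MEMINFO_FILE` override is an illustrative addition for testing and this is not the exact SPDK source:

```shell
#!/usr/bin/env bash
# Needed for the "Node +([0-9]) " extended-glob prefix strip seen in the trace.
shopt -s extglob

# get_meminfo KEY [NODE] -> prints the value for KEY from /proc/meminfo
# (or the per-NUMA-node meminfo when NODE is given and the node exists).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local -a mem
    # MEMINFO_FILE is a hypothetical override (not in the original) so the
    # sketch can be exercised against a fixture file.
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" into key / value, as the trace's
        # `IFS=': '` + `read -r var val _` loop does.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}
```

In the log, a non-matching key triggers `continue` (`common.sh@32`) and a match ends with `echo`/`return 0` (`common.sh@33`); the loop above expresses the same control flow directly.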
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45790512 kB' 'MemAvailable: 49292972 kB' 'Buffers: 2704 kB' 'Cached: 10278640 kB' 'SwapCached: 0 kB' 'Active: 7293436 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898844 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521864 kB' 'Mapped: 171600 kB' 'Shmem: 6380156 kB' 'KReclaimable: 190224 kB' 'Slab: 557104 kB' 'SReclaimable: 190224 kB' 'SUnreclaim: 366880 kB' 'KernelStack: 12896 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8001952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 13844480 kB' 'DirectMap1G: 53477376 kB' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.600 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.601 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.602 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21572392 kB' 'MemUsed: 11304548 kB' 'SwapCached: 0 kB' 'Active: 4984260 kB' 'Inactive: 3264144 kB' 'Active(anon): 4795688 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950296 kB' 'Mapped: 61460 kB' 'AnonPages: 301184 kB' 'Shmem: 4497580 kB' 'KernelStack: 6872 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114376 kB' 'Slab: 301232 kB' 'SReclaimable: 114376 kB' 'SUnreclaim: 186856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 
16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 
16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.603 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:57.862 node0=1024 expecting 1024
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:57.862
00:02:57.862 real 0m2.664s
00:02:57.862 user 0m1.076s
00:02:57.862 sys 0m1.504s
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:57.862 16:19:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:57.862 ************************************
00:02:57.862 END TEST no_shrink_alloc
00:02:57.862 ************************************
00:02:57.862 16:19:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:57.862 16:19:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:57.862
00:02:57.862 real 0m11.341s
00:02:57.862 user 0m4.371s
00:02:57.862 sys 0m5.819s
00:02:57.862 16:19:37 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:57.862 16:19:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:57.862 ************************************
00:02:57.862 END TEST hugepages
00:02:57.862 ************************************
00:02:57.862 16:19:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:02:57.862 16:19:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:57.862 16:19:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:57.862 16:19:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:57.862 16:19:37 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:57.862 ************************************
00:02:57.862 START TEST driver
00:02:57.862 ************************************
00:02:57.862 16:19:37 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:57.862 * Looking for test storage...
00:02:57.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:57.862 16:19:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:57.862 16:19:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:57.862 16:19:37 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:00.391 16:19:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:00.391 16:19:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:00.391 16:19:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:00.391 16:19:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:00.391 ************************************
00:03:00.391 START TEST guess_driver
00:03:00.391 ************************************
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:00.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:00.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:00.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:00.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:00.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:00.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:00.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:00.391 Looking for driver=vfio-pci
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:00.391 16:19:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:01.763 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:02.698 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:02.698 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:02.698 16:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:02.698 16:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:02.698 16:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:02.698 16:19:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:02.698 16:19:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:05.236
00:03:05.236 real 0m4.816s
00:03:05.236 user 0m1.112s
00:03:05.236 sys 0m1.812s
00:03:05.236 16:19:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:05.236 16:19:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:05.236 ************************************
00:03:05.236 END TEST guess_driver
00:03:05.236 ************************************
00:03:05.236 16:19:44 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:03:05.236
00:03:05.236 real 0m7.298s
00:03:05.236 user 0m1.641s
00:03:05.236 sys 0m2.776s
00:03:05.236 16:19:44 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:05.236 16:19:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:05.236 ************************************
00:03:05.236 END TEST driver
00:03:05.236 ************************************
00:03:05.236 16:19:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:05.236 16:19:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:05.236 16:19:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:05.236 16:19:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:05.236 16:19:44 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:05.236 ************************************
00:03:05.236 START TEST devices
00:03:05.236 ************************************
00:03:05.236 16:19:44 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:05.236 * Looking for test storage...
00:03:05.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:05.236 16:19:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:05.236 16:19:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:05.236 16:19:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:05.236 16:19:44 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:06.611 16:19:46 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:06.611 No valid GPT data, bailing
00:03:06.611 16:19:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:06.611 16:19:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:06.611 16:19:46 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:06.611 16:19:46 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:06.611 16:19:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:06.611 ************************************
00:03:06.611 START TEST nvme_mount
00:03:06.611 ************************************
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:06.611 16:19:46 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:07.545 Creating new GPT entries in memory.
00:03:07.545 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:07.545 other utilities.
00:03:07.545 16:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:07.545 16:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:07.545 16:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:07.545 16:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:07.546 16:19:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:08.923 Creating new GPT entries in memory.
00:03:08.923 The operation has completed successfully.
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1379421
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:08.923 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.924 16:19:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:09.859 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:09.859 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:10.118 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:10.118 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:10.118 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:10.118 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.388 16:19:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:11.321 16:19:50 setup.sh.devices.nvme_mount
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.578 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:11.578 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:11.578 16:19:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:11.578 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.579 16:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:11.579 16:19:51 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:11.579 16:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.579 16:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.951 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.951 16:19:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:12.952 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:12.952 00:03:12.952 real 0m6.310s 00:03:12.952 user 0m1.462s 00:03:12.952 sys 0m2.392s 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.952 16:19:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:12.952 ************************************ 00:03:12.952 END TEST nvme_mount 00:03:12.952 ************************************ 00:03:12.952 16:19:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:12.952 16:19:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:03:12.952 16:19:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.952 16:19:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.952 16:19:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:12.952 ************************************ 00:03:12.952 START TEST dm_mount 00:03:12.952 ************************************ 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:12.952 16:19:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:14.329 Creating new GPT entries in memory. 00:03:14.329 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:14.329 other utilities. 00:03:14.329 16:19:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:14.329 16:19:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:14.329 16:19:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:14.329 16:19:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:14.329 16:19:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:15.267 Creating new GPT entries in memory. 00:03:15.267 The operation has completed successfully. 00:03:15.267 16:19:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:15.267 16:19:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.267 16:19:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:15.267 16:19:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:15.267 16:19:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:16.203 The operation has completed successfully. 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1381810 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.203 16:19:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.577 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:17.578 16:19:56 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.578 16:19:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:18.512 16:19:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:18.770 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:18.770 00:03:18.770 real 0m5.773s 00:03:18.770 user 0m0.995s 00:03:18.770 sys 0m1.628s 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.770 16:19:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:18.770 ************************************ 00:03:18.770 END TEST dm_mount 00:03:18.770 ************************************ 00:03:18.770 16:19:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:18.770 16:19:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:18.770 16:19:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:18.770 16:19:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.770 16:19:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.770 16:19:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:18.770 16:19:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.770 16:19:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:19.027 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:19.027 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:03:19.027 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:19.027 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:19.027 16:19:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:19.027 16:19:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:19.027 16:19:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:19.027 16:19:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:19.027 16:19:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:19.027 16:19:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:19.027 16:19:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:19.027 00:03:19.027 real 0m13.941s 00:03:19.027 user 0m3.069s 00:03:19.027 sys 0m5.027s 00:03:19.027 16:19:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.027 16:19:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:19.027 ************************************ 00:03:19.027 END TEST devices 00:03:19.027 ************************************ 00:03:19.027 16:19:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:19.027 00:03:19.027 real 0m43.148s 00:03:19.027 user 0m12.392s 00:03:19.027 sys 0m18.885s 00:03:19.027 16:19:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.027 16:19:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:19.027 ************************************ 00:03:19.027 END TEST setup.sh 00:03:19.027 ************************************ 00:03:19.027 16:19:58 -- common/autotest_common.sh@1142 -- # return 0 00:03:19.027 16:19:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:20.402 Hugepages 00:03:20.402 node hugesize free / total 
00:03:20.402 node0 1048576kB 0 / 0 00:03:20.402 node0 2048kB 2048 / 2048 00:03:20.402 node1 1048576kB 0 / 0 00:03:20.402 node1 2048kB 0 / 0 00:03:20.402 00:03:20.402 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:20.402 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:20.403 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:20.403 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:20.403 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:20.403 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:20.403 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:20.403 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:20.403 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:20.403 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:20.403 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:20.403 16:19:59 -- spdk/autotest.sh@130 -- # uname -s 00:03:20.403 16:19:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:20.403 16:19:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:20.403 16:19:59 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.337 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:21.337 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:21.337 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:21.337 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:21.601 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:21.601 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:21.602 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:21.602 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:21.602 
0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:21.602 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:21.602 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:21.602 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:21.602 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:21.602 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:21.602 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:21.602 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:22.544 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.544 16:20:02 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:23.478 16:20:03 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:23.478 16:20:03 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:23.478 16:20:03 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:23.736 16:20:03 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:23.736 16:20:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:23.736 16:20:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:23.737 16:20:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:23.737 16:20:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:23.737 16:20:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:23.737 16:20:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:23.737 16:20:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:23.737 16:20:03 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.671 Waiting for block devices as requested 00:03:24.671 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:24.929 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:24.929 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:25.187 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:25.187 0000:00:04.4 (8086 
0e24): vfio-pci -> ioatdma 00:03:25.187 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:25.187 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:25.446 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:25.446 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:25.446 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:25.446 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:25.704 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:25.704 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:25.704 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:25.962 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:25.962 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:25.962 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:26.220 16:20:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:26.220 16:20:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:26.220 16:20:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:26.220 16:20:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:26.220 16:20:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:26.220 16:20:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:26.220 16:20:05 -- 
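The `get_nvme_ctrlr_from_bdf` steps traced above (readlink the `/sys/class/nvme` entry, grep for the BDF, then take the basename) reduce to simple path handling once the sysfs path is known. A minimal sketch of just that string step, assuming the sysfs lookup has already produced the path shown in the log (the real helper also walks `/sys/class/nvme` to find it):

```python
import os

def ctrlr_from_sysfs_path(bdf_sysfs_path):
    """Given a resolved sysfs path like .../0000:88:00.0/nvme/nvme0,
    return the controller name and its /dev node, mirroring the
    basename + printf steps in the trace above."""
    name = os.path.basename(bdf_sysfs_path)  # e.g. 'nvme0'
    return name, "/dev/" + name

print(ctrlr_from_sysfs_path(
    "/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0"))
# ('nvme0', '/dev/nvme0') – matching nvme_ctrlr=/dev/nvme0 in the log
```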
common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:26.220 16:20:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:26.220 16:20:05 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:26.220 16:20:05 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:26.220 16:20:05 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:26.220 16:20:05 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:26.220 16:20:05 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:26.220 16:20:05 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:26.220 16:20:05 -- common/autotest_common.sh@1557 -- # continue 00:03:26.220 16:20:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:26.220 16:20:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:26.220 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 16:20:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:26.220 16:20:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:26.220 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:03:26.220 16:20:05 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.595 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:27.595 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:27.595 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:27.595 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:27.595 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:27.595 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:27.595 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:27.595 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.2 (8086 
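The `oacs=' 0xf'` / `oacs_ns_manage=8` pair above comes from grepping the OACS field out of `nvme id-ctrl` and masking bit 3 (0x8), the Namespace Management capability. A sketch of that mask, using the exact value from the log:

```python
def oacs_ns_manage(oacs_field):
    # 'nvme id-ctrl' prints e.g. 'oacs : 0xf'; the script cuts the value
    # after ':' and masks bit 3 (0x8) – Namespace Management support.
    oacs = int(oacs_field.strip(), 16)
    return oacs & 0x8

print(oacs_ns_manage(" 0xf"))  # 8, as in the log: oacs_ns_manage=8
```

A non-zero result is what lets the script proceed past the `[[ 8 -ne 0 ]]` check.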
0e22): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:27.595 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:28.530 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.530 16:20:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:28.530 16:20:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:28.530 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:03:28.530 16:20:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:28.530 16:20:07 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:28.530 16:20:07 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:28.530 16:20:07 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:28.530 16:20:07 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:28.530 16:20:07 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:28.530 16:20:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:28.530 16:20:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:28.530 16:20:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:28.530 16:20:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:28.530 16:20:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:28.530 16:20:08 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:28.530 16:20:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:28.530 16:20:08 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:28.530 16:20:08 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:28.530 16:20:08 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:28.530 16:20:08 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:28.530 16:20:08 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:28.530 16:20:08 -- 
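The `get_nvme_bdfs` trace above pipes `gen_nvme.sh` into `jq -r '.config[].params.traddr'` to enumerate NVMe PCI addresses. The same extraction in Python, against a hand-written config fragment shaped like gen_nvme.sh output (the fragment itself is illustrative, not captured from this run):

```python
import json

def nvme_traddrs(gen_nvme_json):
    # Equivalent of: gen_nvme.sh | jq -r '.config[].params.traddr'
    cfg = json.loads(gen_nvme_json)
    return [c["params"]["traddr"] for c in cfg["config"]]

sample = '{"config": [{"params": {"traddr": "0000:88:00.0", "trtype": "PCIe"}}]}'
print(nvme_traddrs(sample))  # ['0000:88:00.0'] – the one BDF seen in this run
```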
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:28.530 16:20:08 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:28.530 16:20:08 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1386997 00:03:28.530 16:20:08 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:28.530 16:20:08 -- common/autotest_common.sh@1598 -- # waitforlisten 1386997 00:03:28.530 16:20:08 -- common/autotest_common.sh@829 -- # '[' -z 1386997 ']' 00:03:28.530 16:20:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.530 16:20:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:28.530 16:20:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.530 16:20:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:28.530 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:03:28.530 [2024-07-15 16:20:08.095998] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:03:28.530 [2024-07-15 16:20:08.096079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386997 ] 00:03:28.530 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.788 [2024-07-15 16:20:08.158532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.788 [2024-07-15 16:20:08.278014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.720 16:20:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:29.720 16:20:09 -- common/autotest_common.sh@862 -- # return 0 00:03:29.720 16:20:09 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:29.720 16:20:09 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:29.720 16:20:09 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:32.999 nvme0n1 00:03:32.999 16:20:12 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:32.999 [2024-07-15 16:20:12.336684] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:32.999 [2024-07-15 16:20:12.336731] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:32.999 request: 00:03:32.999 { 00:03:32.999 "nvme_ctrlr_name": "nvme0", 00:03:32.999 "password": "test", 00:03:32.999 "method": "bdev_nvme_opal_revert", 00:03:32.999 "req_id": 1 00:03:32.999 } 00:03:32.999 Got JSON-RPC error response 00:03:32.999 response: 00:03:32.999 { 00:03:32.999 "code": -32603, 00:03:32.999 "message": "Internal error" 00:03:32.999 } 00:03:32.999 16:20:12 -- common/autotest_common.sh@1604 -- # true 00:03:32.999 16:20:12 -- common/autotest_common.sh@1605 -- # 
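The `bdev_nvme_opal_revert` failure above is a JSON-RPC exchange over `/var/tmp/spdk.sock`: the server logs the parsed request and returns `-32603 Internal error` because Revert TPer failed with status 18. A sketch of the JSON-RPC 2.0 payload that `rpc.py` would send for this call (field names taken from the request echoed in the log; the framing as a standard 2.0 envelope is an assumption about the wire format, since the log shows only the server-side view):

```python
import json

def opal_revert_request(ctrlr, password, req_id=1):
    # Builds the request body for the bdev_nvme_opal_revert method seen
    # in the log; rpc.py writes this to the UNIX socket /var/tmp/spdk.sock.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "bdev_nvme_opal_revert",
        "params": {"nvme_ctrlr_name": ctrlr, "password": password},
        "id": req_id,
    })

print(opal_revert_request("nvme0", "test"))
```

On this drive the call failed, but `-- # true` on the next line shows the test tolerates that outcome.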
(( ++bdf_id )) 00:03:32.999 16:20:12 -- common/autotest_common.sh@1608 -- # killprocess 1386997 00:03:32.999 16:20:12 -- common/autotest_common.sh@948 -- # '[' -z 1386997 ']' 00:03:32.999 16:20:12 -- common/autotest_common.sh@952 -- # kill -0 1386997 00:03:32.999 16:20:12 -- common/autotest_common.sh@953 -- # uname 00:03:32.999 16:20:12 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:32.999 16:20:12 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1386997 00:03:32.999 16:20:12 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:32.999 16:20:12 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:32.999 16:20:12 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1386997' 00:03:32.999 killing process with pid 1386997 00:03:32.999 16:20:12 -- common/autotest_common.sh@967 -- # kill 1386997 00:03:32.999 16:20:12 -- common/autotest_common.sh@972 -- # wait 1386997 00:03:34.894 16:20:14 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:34.894 16:20:14 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:34.894 16:20:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:34.894 16:20:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:34.894 16:20:14 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:34.894 16:20:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:34.894 16:20:14 -- common/autotest_common.sh@10 -- # set +x 00:03:34.894 16:20:14 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:34.894 16:20:14 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.894 16:20:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.894 16:20:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.894 16:20:14 -- common/autotest_common.sh@10 -- # set +x 00:03:34.894 ************************************ 00:03:34.894 START TEST env 00:03:34.894 ************************************ 00:03:34.894 16:20:14 env -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.894 * Looking for test storage... 00:03:34.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:34.894 16:20:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.894 16:20:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.894 16:20:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.894 16:20:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.894 ************************************ 00:03:34.894 START TEST env_memory 00:03:34.894 ************************************ 00:03:34.894 16:20:14 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.894 00:03:34.894 00:03:34.894 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.894 http://cunit.sourceforge.net/ 00:03:34.894 00:03:34.894 00:03:34.894 Suite: memory 00:03:34.894 Test: alloc and free memory map ...[2024-07-15 16:20:14.333511] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:34.894 passed 00:03:34.894 Test: mem map translation ...[2024-07-15 16:20:14.353972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:34.894 [2024-07-15 16:20:14.353993] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:34.894 [2024-07-15 16:20:14.354048] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual 
address 281474976710656 00:03:34.894 [2024-07-15 16:20:14.354060] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:34.894 passed 00:03:34.894 Test: mem map registration ...[2024-07-15 16:20:14.395046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:34.894 [2024-07-15 16:20:14.395065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:34.894 passed 00:03:34.894 Test: mem map adjacent registrations ...passed 00:03:34.894 00:03:34.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.894 suites 1 1 n/a 0 0 00:03:34.894 tests 4 4 4 0 0 00:03:34.894 asserts 152 152 152 0 n/a 00:03:34.894 00:03:34.894 Elapsed time = 0.138 seconds 00:03:34.894 00:03:34.894 real 0m0.144s 00:03:34.894 user 0m0.137s 00:03:34.894 sys 0m0.007s 00:03:34.894 16:20:14 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.894 16:20:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:34.894 ************************************ 00:03:34.894 END TEST env_memory 00:03:34.894 ************************************ 00:03:34.894 16:20:14 env -- common/autotest_common.sh@1142 -- # return 0 00:03:34.894 16:20:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.894 16:20:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.894 16:20:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.894 16:20:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.152 ************************************ 00:03:35.152 START TEST env_vtophys 00:03:35.152 ************************************ 00:03:35.153 16:20:14 
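The `mem map translation` errors above are the test deliberately passing mis-aligned arguments: `vaddr=2097152 len=1234` and `vaddr=1234 len=2097152` both fail because `spdk_mem_map_set_translation` works at 2 MiB granularity. A sketch of that validity check, assuming alignment is the only constraint being exercised here (the real function also bounds-checks the address range, as the `281474976710656` error shows):

```python
HUGEPAGE_2MB = 0x200000  # 2 MiB translation granularity

def translation_args_valid(vaddr, length):
    # Both the virtual address and the length must be 2 MiB multiples;
    # len=1234 and vaddr=1234 from the log each fail this test.
    return vaddr % HUGEPAGE_2MB == 0 and length % HUGEPAGE_2MB == 0 and length > 0

print(translation_args_valid(2097152, 1234))     # False – rejected in the log
print(translation_args_valid(1234, 2097152))     # False – rejected in the log
print(translation_args_valid(2097152, 2097152))  # True
```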
env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:35.153 EAL: lib.eal log level changed from notice to debug 00:03:35.153 EAL: Detected lcore 0 as core 0 on socket 0 00:03:35.153 EAL: Detected lcore 1 as core 1 on socket 0 00:03:35.153 EAL: Detected lcore 2 as core 2 on socket 0 00:03:35.153 EAL: Detected lcore 3 as core 3 on socket 0 00:03:35.153 EAL: Detected lcore 4 as core 4 on socket 0 00:03:35.153 EAL: Detected lcore 5 as core 5 on socket 0 00:03:35.153 EAL: Detected lcore 6 as core 8 on socket 0 00:03:35.153 EAL: Detected lcore 7 as core 9 on socket 0 00:03:35.153 EAL: Detected lcore 8 as core 10 on socket 0 00:03:35.153 EAL: Detected lcore 9 as core 11 on socket 0 00:03:35.153 EAL: Detected lcore 10 as core 12 on socket 0 00:03:35.153 EAL: Detected lcore 11 as core 13 on socket 0 00:03:35.153 EAL: Detected lcore 12 as core 0 on socket 1 00:03:35.153 EAL: Detected lcore 13 as core 1 on socket 1 00:03:35.153 EAL: Detected lcore 14 as core 2 on socket 1 00:03:35.153 EAL: Detected lcore 15 as core 3 on socket 1 00:03:35.153 EAL: Detected lcore 16 as core 4 on socket 1 00:03:35.153 EAL: Detected lcore 17 as core 5 on socket 1 00:03:35.153 EAL: Detected lcore 18 as core 8 on socket 1 00:03:35.153 EAL: Detected lcore 19 as core 9 on socket 1 00:03:35.153 EAL: Detected lcore 20 as core 10 on socket 1 00:03:35.153 EAL: Detected lcore 21 as core 11 on socket 1 00:03:35.153 EAL: Detected lcore 22 as core 12 on socket 1 00:03:35.153 EAL: Detected lcore 23 as core 13 on socket 1 00:03:35.153 EAL: Detected lcore 24 as core 0 on socket 0 00:03:35.153 EAL: Detected lcore 25 as core 1 on socket 0 00:03:35.153 EAL: Detected lcore 26 as core 2 on socket 0 00:03:35.153 EAL: Detected lcore 27 as core 3 on socket 0 00:03:35.153 EAL: Detected lcore 28 as core 4 on socket 0 00:03:35.153 EAL: Detected lcore 29 as core 5 on socket 0 00:03:35.153 EAL: Detected lcore 30 as core 8 on socket 0 
00:03:35.153 EAL: Detected lcore 31 as core 9 on socket 0 00:03:35.153 EAL: Detected lcore 32 as core 10 on socket 0 00:03:35.153 EAL: Detected lcore 33 as core 11 on socket 0 00:03:35.153 EAL: Detected lcore 34 as core 12 on socket 0 00:03:35.153 EAL: Detected lcore 35 as core 13 on socket 0 00:03:35.153 EAL: Detected lcore 36 as core 0 on socket 1 00:03:35.153 EAL: Detected lcore 37 as core 1 on socket 1 00:03:35.153 EAL: Detected lcore 38 as core 2 on socket 1 00:03:35.153 EAL: Detected lcore 39 as core 3 on socket 1 00:03:35.153 EAL: Detected lcore 40 as core 4 on socket 1 00:03:35.153 EAL: Detected lcore 41 as core 5 on socket 1 00:03:35.153 EAL: Detected lcore 42 as core 8 on socket 1 00:03:35.153 EAL: Detected lcore 43 as core 9 on socket 1 00:03:35.153 EAL: Detected lcore 44 as core 10 on socket 1 00:03:35.153 EAL: Detected lcore 45 as core 11 on socket 1 00:03:35.153 EAL: Detected lcore 46 as core 12 on socket 1 00:03:35.153 EAL: Detected lcore 47 as core 13 on socket 1 00:03:35.153 EAL: Maximum logical cores by configuration: 128 00:03:35.153 EAL: Detected CPU lcores: 48 00:03:35.153 EAL: Detected NUMA nodes: 2 00:03:35.153 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:35.153 EAL: Detected shared linkage of DPDK 00:03:35.153 EAL: No shared files mode enabled, IPC will be disabled 00:03:35.153 EAL: Bus pci wants IOVA as 'DC' 00:03:35.153 EAL: Buses did not request a specific IOVA mode. 00:03:35.153 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:35.153 EAL: Selected IOVA mode 'VA' 00:03:35.153 EAL: No free 2048 kB hugepages reported on node 1 00:03:35.153 EAL: Probing VFIO support... 
00:03:35.153 EAL: IOMMU type 1 (Type 1) is supported 00:03:35.153 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:35.153 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:35.153 EAL: VFIO support initialized 00:03:35.153 EAL: Ask a virtual area of 0x2e000 bytes 00:03:35.153 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:35.153 EAL: Setting up physically contiguous memory... 00:03:35.153 EAL: Setting maximum number of open files to 524288 00:03:35.153 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:35.153 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:35.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:35.153 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:35.153 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:35.153 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:35.153 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:35.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:35.153 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:35.153 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:35.153 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:35.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.153 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:35.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.153 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:03:35.153 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:35.153 EAL: Hugepages will be freed exactly as allocated. 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: TSC frequency is ~2700000 KHz 00:03:35.153 EAL: Main lcore 0 is ready (tid=7f2422442a00;cpuset=[0]) 00:03:35.153 EAL: Trying to obtain current memory policy. 00:03:35.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.153 EAL: Restoring previous memory policy: 0 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was expanded by 2MB 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:35.153 EAL: Mem event callback 'spdk:(nil)' registered 00:03:35.153 00:03:35.153 00:03:35.153 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.153 http://cunit.sourceforge.net/ 00:03:35.153 00:03:35.153 00:03:35.153 Suite: components_suite 00:03:35.153 Test: vtophys_malloc_test ...passed 00:03:35.153 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:35.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.153 EAL: Restoring previous memory policy: 4 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was expanded by 4MB 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was shrunk by 4MB 00:03:35.153 EAL: Trying to obtain current memory policy. 
00:03:35.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.153 EAL: Restoring previous memory policy: 4 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was expanded by 6MB 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was shrunk by 6MB 00:03:35.153 EAL: Trying to obtain current memory policy. 00:03:35.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.153 EAL: Restoring previous memory policy: 4 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was expanded by 10MB 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was shrunk by 10MB 00:03:35.153 EAL: Trying to obtain current memory policy. 00:03:35.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.153 EAL: Restoring previous memory policy: 4 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was expanded by 18MB 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was shrunk by 18MB 00:03:35.153 EAL: Trying to obtain current memory policy. 
00:03:35.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.153 EAL: Restoring previous memory policy: 4 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was expanded by 34MB 00:03:35.153 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.153 EAL: request: mp_malloc_sync 00:03:35.153 EAL: No shared files mode enabled, IPC is disabled 00:03:35.153 EAL: Heap on socket 0 was shrunk by 34MB 00:03:35.153 EAL: Trying to obtain current memory policy. 00:03:35.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.154 EAL: Restoring previous memory policy: 4 00:03:35.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.154 EAL: request: mp_malloc_sync 00:03:35.154 EAL: No shared files mode enabled, IPC is disabled 00:03:35.154 EAL: Heap on socket 0 was expanded by 66MB 00:03:35.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.154 EAL: request: mp_malloc_sync 00:03:35.154 EAL: No shared files mode enabled, IPC is disabled 00:03:35.154 EAL: Heap on socket 0 was shrunk by 66MB 00:03:35.154 EAL: Trying to obtain current memory policy. 00:03:35.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.154 EAL: Restoring previous memory policy: 4 00:03:35.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.154 EAL: request: mp_malloc_sync 00:03:35.154 EAL: No shared files mode enabled, IPC is disabled 00:03:35.154 EAL: Heap on socket 0 was expanded by 130MB 00:03:35.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.154 EAL: request: mp_malloc_sync 00:03:35.154 EAL: No shared files mode enabled, IPC is disabled 00:03:35.154 EAL: Heap on socket 0 was shrunk by 130MB 00:03:35.154 EAL: Trying to obtain current memory policy. 
00:03:35.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.412 EAL: Restoring previous memory policy: 4 00:03:35.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.412 EAL: request: mp_malloc_sync 00:03:35.412 EAL: No shared files mode enabled, IPC is disabled 00:03:35.412 EAL: Heap on socket 0 was expanded by 258MB 00:03:35.412 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.412 EAL: request: mp_malloc_sync 00:03:35.412 EAL: No shared files mode enabled, IPC is disabled 00:03:35.412 EAL: Heap on socket 0 was shrunk by 258MB 00:03:35.412 EAL: Trying to obtain current memory policy. 00:03:35.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.669 EAL: Restoring previous memory policy: 4 00:03:35.669 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.669 EAL: request: mp_malloc_sync 00:03:35.669 EAL: No shared files mode enabled, IPC is disabled 00:03:35.669 EAL: Heap on socket 0 was expanded by 514MB 00:03:35.669 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.669 EAL: request: mp_malloc_sync 00:03:35.669 EAL: No shared files mode enabled, IPC is disabled 00:03:35.669 EAL: Heap on socket 0 was shrunk by 514MB 00:03:35.669 EAL: Trying to obtain current memory policy. 
00:03:35.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.236 EAL: Restoring previous memory policy: 4 00:03:36.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.236 EAL: request: mp_malloc_sync 00:03:36.236 EAL: No shared files mode enabled, IPC is disabled 00:03:36.236 EAL: Heap on socket 0 was expanded by 1026MB 00:03:36.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.493 EAL: request: mp_malloc_sync 00:03:36.493 EAL: No shared files mode enabled, IPC is disabled 00:03:36.493 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:36.493 passed 00:03:36.493 00:03:36.493 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.493 suites 1 1 n/a 0 0 00:03:36.493 tests 2 2 2 0 0 00:03:36.493 asserts 497 497 497 0 n/a 00:03:36.493 00:03:36.493 Elapsed time = 1.356 seconds 00:03:36.493 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.493 EAL: request: mp_malloc_sync 00:03:36.493 EAL: No shared files mode enabled, IPC is disabled 00:03:36.493 EAL: Heap on socket 0 was shrunk by 2MB 00:03:36.493 EAL: No shared files mode enabled, IPC is disabled 00:03:36.493 EAL: No shared files mode enabled, IPC is disabled 00:03:36.493 EAL: No shared files mode enabled, IPC is disabled 00:03:36.493 00:03:36.493 real 0m1.476s 00:03:36.493 user 0m0.846s 00:03:36.493 sys 0m0.590s 00:03:36.493 16:20:15 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.493 16:20:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:36.493 ************************************ 00:03:36.493 END TEST env_vtophys 00:03:36.493 ************************************ 00:03:36.493 16:20:15 env -- common/autotest_common.sh@1142 -- # return 0 00:03:36.493 16:20:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:36.493 16:20:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.493 16:20:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:03:36.494 16:20:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:36.494 ************************************ 00:03:36.494 START TEST env_pci 00:03:36.494 ************************************ 00:03:36.494 16:20:16 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:36.494 00:03:36.494 00:03:36.494 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.494 http://cunit.sourceforge.net/ 00:03:36.494 00:03:36.494 00:03:36.494 Suite: pci 00:03:36.494 Test: pci_hook ...[2024-07-15 16:20:16.028452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1388017 has claimed it 00:03:36.494 EAL: Cannot find device (10000:00:01.0) 00:03:36.494 EAL: Failed to attach device on primary process 00:03:36.494 passed 00:03:36.494 00:03:36.494 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.494 suites 1 1 n/a 0 0 00:03:36.494 tests 1 1 1 0 0 00:03:36.494 asserts 25 25 25 0 n/a 00:03:36.494 00:03:36.494 Elapsed time = 0.022 seconds 00:03:36.494 00:03:36.494 real 0m0.034s 00:03:36.494 user 0m0.008s 00:03:36.494 sys 0m0.026s 00:03:36.494 16:20:16 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.494 16:20:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:36.494 ************************************ 00:03:36.494 END TEST env_pci 00:03:36.494 ************************************ 00:03:36.494 16:20:16 env -- common/autotest_common.sh@1142 -- # return 0 00:03:36.494 16:20:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:36.494 16:20:16 env -- env/env.sh@15 -- # uname 00:03:36.494 16:20:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:36.494 16:20:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:36.494 16:20:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:36.494 16:20:16 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:36.494 16:20:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.494 16:20:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:36.751 ************************************ 00:03:36.751 START TEST env_dpdk_post_init 00:03:36.751 ************************************ 00:03:36.751 16:20:16 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:36.751 EAL: Detected CPU lcores: 48 00:03:36.751 EAL: Detected NUMA nodes: 2 00:03:36.751 EAL: Detected shared linkage of DPDK 00:03:36.751 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:36.751 EAL: Selected IOVA mode 'VA' 00:03:36.751 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.751 EAL: VFIO support initialized 00:03:36.751 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:36.751 EAL: Using IOMMU type 1 (Type 1) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 
1) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:36.751 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:37.017 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:37.017 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:37.017 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:37.017 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:37.631 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:40.917 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:40.917 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:40.917 Starting DPDK initialization... 00:03:40.917 Starting SPDK post initialization... 00:03:40.917 SPDK NVMe probe 00:03:40.917 Attaching to 0000:88:00.0 00:03:40.917 Attached to 0000:88:00.0 00:03:40.917 Cleaning up... 
00:03:40.917 00:03:40.917 real 0m4.397s 00:03:40.917 user 0m3.287s 00:03:40.917 sys 0m0.165s 00:03:40.917 16:20:20 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.917 16:20:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:40.917 ************************************ 00:03:40.917 END TEST env_dpdk_post_init 00:03:40.917 ************************************ 00:03:40.917 16:20:20 env -- common/autotest_common.sh@1142 -- # return 0 00:03:40.917 16:20:20 env -- env/env.sh@26 -- # uname 00:03:40.917 16:20:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:40.917 16:20:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.917 16:20:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.917 16:20:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.176 16:20:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.176 ************************************ 00:03:41.176 START TEST env_mem_callbacks 00:03:41.176 ************************************ 00:03:41.176 16:20:20 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:41.176 EAL: Detected CPU lcores: 48 00:03:41.176 EAL: Detected NUMA nodes: 2 00:03:41.176 EAL: Detected shared linkage of DPDK 00:03:41.176 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:41.176 EAL: Selected IOVA mode 'VA' 00:03:41.176 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.176 EAL: VFIO support initialized 00:03:41.176 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:41.176 00:03:41.176 00:03:41.176 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.176 http://cunit.sourceforge.net/ 00:03:41.176 00:03:41.176 00:03:41.176 Suite: memory 00:03:41.176 Test: test ... 
00:03:41.176 register 0x200000200000 2097152 00:03:41.176 malloc 3145728 00:03:41.176 register 0x200000400000 4194304 00:03:41.176 buf 0x200000500000 len 3145728 PASSED 00:03:41.176 malloc 64 00:03:41.176 buf 0x2000004fff40 len 64 PASSED 00:03:41.176 malloc 4194304 00:03:41.176 register 0x200000800000 6291456 00:03:41.176 buf 0x200000a00000 len 4194304 PASSED 00:03:41.176 free 0x200000500000 3145728 00:03:41.176 free 0x2000004fff40 64 00:03:41.176 unregister 0x200000400000 4194304 PASSED 00:03:41.176 free 0x200000a00000 4194304 00:03:41.176 unregister 0x200000800000 6291456 PASSED 00:03:41.176 malloc 8388608 00:03:41.176 register 0x200000400000 10485760 00:03:41.176 buf 0x200000600000 len 8388608 PASSED 00:03:41.176 free 0x200000600000 8388608 00:03:41.176 unregister 0x200000400000 10485760 PASSED 00:03:41.176 passed 00:03:41.176 00:03:41.176 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.176 suites 1 1 n/a 0 0 00:03:41.176 tests 1 1 1 0 0 00:03:41.176 asserts 15 15 15 0 n/a 00:03:41.176 00:03:41.176 Elapsed time = 0.005 seconds 00:03:41.176 00:03:41.176 real 0m0.043s 00:03:41.176 user 0m0.016s 00:03:41.176 sys 0m0.027s 00:03:41.176 16:20:20 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.176 16:20:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:41.176 ************************************ 00:03:41.176 END TEST env_mem_callbacks 00:03:41.176 ************************************ 00:03:41.176 16:20:20 env -- common/autotest_common.sh@1142 -- # return 0 00:03:41.176 00:03:41.176 real 0m6.375s 00:03:41.176 user 0m4.415s 00:03:41.176 sys 0m0.991s 00:03:41.176 16:20:20 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.176 16:20:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.176 ************************************ 00:03:41.176 END TEST env 00:03:41.176 ************************************ 00:03:41.176 16:20:20 -- common/autotest_common.sh@1142 -- # return 0 
00:03:41.176 16:20:20 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:41.176 16:20:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.176 16:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.176 16:20:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.176 ************************************ 00:03:41.176 START TEST rpc 00:03:41.176 ************************************ 00:03:41.177 16:20:20 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:41.177 * Looking for test storage... 00:03:41.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.177 16:20:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1388671 00:03:41.177 16:20:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:41.177 16:20:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.177 16:20:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1388671 00:03:41.177 16:20:20 rpc -- common/autotest_common.sh@829 -- # '[' -z 1388671 ']' 00:03:41.177 16:20:20 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.177 16:20:20 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:41.177 16:20:20 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.177 16:20:20 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:41.177 16:20:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.177 [2024-07-15 16:20:20.747379] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:03:41.177 [2024-07-15 16:20:20.747458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388671 ] 00:03:41.177 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.436 [2024-07-15 16:20:20.805180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.436 [2024-07-15 16:20:20.912946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:41.436 [2024-07-15 16:20:20.913033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1388671' to capture a snapshot of events at runtime. 00:03:41.436 [2024-07-15 16:20:20.913050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:41.436 [2024-07-15 16:20:20.913063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:41.436 [2024-07-15 16:20:20.913074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1388671 for offline analysis/debug. 
00:03:41.436 [2024-07-15 16:20:20.913107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.696 16:20:21 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:41.696 16:20:21 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:41.696 16:20:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.696 16:20:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.696 16:20:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:41.697 16:20:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:41.697 16:20:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.697 16:20:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.697 16:20:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.697 ************************************ 00:03:41.697 START TEST rpc_integrity 00:03:41.697 ************************************ 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.697 16:20:21 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.697 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.697 { 00:03:41.697 "name": "Malloc0", 00:03:41.697 "aliases": [ 00:03:41.697 "5a6428e4-8c37-4fb5-bb73-1cee20c02b24" 00:03:41.697 ], 00:03:41.697 "product_name": "Malloc disk", 00:03:41.697 "block_size": 512, 00:03:41.697 "num_blocks": 16384, 00:03:41.697 "uuid": "5a6428e4-8c37-4fb5-bb73-1cee20c02b24", 00:03:41.697 "assigned_rate_limits": { 00:03:41.697 "rw_ios_per_sec": 0, 00:03:41.697 "rw_mbytes_per_sec": 0, 00:03:41.697 "r_mbytes_per_sec": 0, 00:03:41.697 "w_mbytes_per_sec": 0 00:03:41.697 }, 00:03:41.697 "claimed": false, 00:03:41.697 "zoned": false, 00:03:41.697 "supported_io_types": { 00:03:41.697 "read": true, 00:03:41.697 "write": true, 00:03:41.697 "unmap": true, 00:03:41.697 "flush": true, 00:03:41.697 "reset": true, 00:03:41.697 "nvme_admin": false, 00:03:41.697 "nvme_io": false, 00:03:41.697 "nvme_io_md": false, 00:03:41.697 "write_zeroes": true, 00:03:41.697 "zcopy": true, 00:03:41.697 "get_zone_info": false, 00:03:41.697 
"zone_management": false, 00:03:41.697 "zone_append": false, 00:03:41.697 "compare": false, 00:03:41.697 "compare_and_write": false, 00:03:41.697 "abort": true, 00:03:41.697 "seek_hole": false, 00:03:41.697 "seek_data": false, 00:03:41.697 "copy": true, 00:03:41.697 "nvme_iov_md": false 00:03:41.697 }, 00:03:41.697 "memory_domains": [ 00:03:41.697 { 00:03:41.697 "dma_device_id": "system", 00:03:41.697 "dma_device_type": 1 00:03:41.697 }, 00:03:41.697 { 00:03:41.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.697 "dma_device_type": 2 00:03:41.697 } 00:03:41.697 ], 00:03:41.697 "driver_specific": {} 00:03:41.697 } 00:03:41.697 ]' 00:03:41.697 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 [2024-07-15 16:20:21.326727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:41.958 [2024-07-15 16:20:21.326774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.958 [2024-07-15 16:20:21.326797] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e52d50 00:03:41.958 [2024-07-15 16:20:21.326812] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.958 [2024-07-15 16:20:21.328570] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.958 [2024-07-15 16:20:21.328598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.958 Passthru0 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.958 { 00:03:41.958 "name": "Malloc0", 00:03:41.958 "aliases": [ 00:03:41.958 "5a6428e4-8c37-4fb5-bb73-1cee20c02b24" 00:03:41.958 ], 00:03:41.958 "product_name": "Malloc disk", 00:03:41.958 "block_size": 512, 00:03:41.958 "num_blocks": 16384, 00:03:41.958 "uuid": "5a6428e4-8c37-4fb5-bb73-1cee20c02b24", 00:03:41.958 "assigned_rate_limits": { 00:03:41.958 "rw_ios_per_sec": 0, 00:03:41.958 "rw_mbytes_per_sec": 0, 00:03:41.958 "r_mbytes_per_sec": 0, 00:03:41.958 "w_mbytes_per_sec": 0 00:03:41.958 }, 00:03:41.958 "claimed": true, 00:03:41.958 "claim_type": "exclusive_write", 00:03:41.958 "zoned": false, 00:03:41.958 "supported_io_types": { 00:03:41.958 "read": true, 00:03:41.958 "write": true, 00:03:41.958 "unmap": true, 00:03:41.958 "flush": true, 00:03:41.958 "reset": true, 00:03:41.958 "nvme_admin": false, 00:03:41.958 "nvme_io": false, 00:03:41.958 "nvme_io_md": false, 00:03:41.958 "write_zeroes": true, 00:03:41.958 "zcopy": true, 00:03:41.958 "get_zone_info": false, 00:03:41.958 "zone_management": false, 00:03:41.958 "zone_append": false, 00:03:41.958 "compare": false, 00:03:41.958 "compare_and_write": false, 00:03:41.958 "abort": true, 00:03:41.958 "seek_hole": false, 00:03:41.958 "seek_data": false, 00:03:41.958 "copy": true, 00:03:41.958 "nvme_iov_md": false 00:03:41.958 }, 00:03:41.958 "memory_domains": [ 00:03:41.958 { 00:03:41.958 "dma_device_id": "system", 00:03:41.958 "dma_device_type": 1 00:03:41.958 }, 00:03:41.958 { 00:03:41.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.958 "dma_device_type": 2 00:03:41.958 } 00:03:41.958 ], 00:03:41.958 "driver_specific": {} 00:03:41.958 }, 00:03:41.958 { 
00:03:41.958 "name": "Passthru0", 00:03:41.958 "aliases": [ 00:03:41.958 "9ec93a06-a62b-598e-b04d-cd2ae8e93ec2" 00:03:41.958 ], 00:03:41.958 "product_name": "passthru", 00:03:41.958 "block_size": 512, 00:03:41.958 "num_blocks": 16384, 00:03:41.958 "uuid": "9ec93a06-a62b-598e-b04d-cd2ae8e93ec2", 00:03:41.958 "assigned_rate_limits": { 00:03:41.958 "rw_ios_per_sec": 0, 00:03:41.958 "rw_mbytes_per_sec": 0, 00:03:41.958 "r_mbytes_per_sec": 0, 00:03:41.958 "w_mbytes_per_sec": 0 00:03:41.958 }, 00:03:41.958 "claimed": false, 00:03:41.958 "zoned": false, 00:03:41.958 "supported_io_types": { 00:03:41.958 "read": true, 00:03:41.958 "write": true, 00:03:41.958 "unmap": true, 00:03:41.958 "flush": true, 00:03:41.958 "reset": true, 00:03:41.958 "nvme_admin": false, 00:03:41.958 "nvme_io": false, 00:03:41.958 "nvme_io_md": false, 00:03:41.958 "write_zeroes": true, 00:03:41.958 "zcopy": true, 00:03:41.958 "get_zone_info": false, 00:03:41.958 "zone_management": false, 00:03:41.958 "zone_append": false, 00:03:41.958 "compare": false, 00:03:41.958 "compare_and_write": false, 00:03:41.958 "abort": true, 00:03:41.958 "seek_hole": false, 00:03:41.958 "seek_data": false, 00:03:41.958 "copy": true, 00:03:41.958 "nvme_iov_md": false 00:03:41.958 }, 00:03:41.958 "memory_domains": [ 00:03:41.958 { 00:03:41.958 "dma_device_id": "system", 00:03:41.958 "dma_device_type": 1 00:03:41.958 }, 00:03:41.958 { 00:03:41.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.958 "dma_device_type": 2 00:03:41.958 } 00:03:41.958 ], 00:03:41.958 "driver_specific": { 00:03:41.958 "passthru": { 00:03:41.958 "name": "Passthru0", 00:03:41.958 "base_bdev_name": "Malloc0" 00:03:41.958 } 00:03:41.958 } 00:03:41.958 } 00:03:41.958 ]' 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.958 16:20:21 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:41.958 16:20:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.958 00:03:41.958 real 0m0.234s 00:03:41.958 user 0m0.156s 00:03:41.958 sys 0m0.020s 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 ************************************ 00:03:41.958 END TEST rpc_integrity 00:03:41.958 ************************************ 00:03:41.958 16:20:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:41.958 16:20:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:41.958 16:20:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.958 16:20:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.958 16:20:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 
************************************ 00:03:41.958 START TEST rpc_plugins 00:03:41.958 ************************************ 00:03:41.958 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:41.958 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:41.958 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.958 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:41.958 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:41.958 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.958 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.958 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.958 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:41.958 { 00:03:41.958 "name": "Malloc1", 00:03:41.958 "aliases": [ 00:03:41.958 "f0cc44e3-ff43-46cd-8f08-85fcedf23713" 00:03:41.958 ], 00:03:41.958 "product_name": "Malloc disk", 00:03:41.958 "block_size": 4096, 00:03:41.958 "num_blocks": 256, 00:03:41.958 "uuid": "f0cc44e3-ff43-46cd-8f08-85fcedf23713", 00:03:41.958 "assigned_rate_limits": { 00:03:41.958 "rw_ios_per_sec": 0, 00:03:41.958 "rw_mbytes_per_sec": 0, 00:03:41.958 "r_mbytes_per_sec": 0, 00:03:41.958 "w_mbytes_per_sec": 0 00:03:41.958 }, 00:03:41.958 "claimed": false, 00:03:41.958 "zoned": false, 00:03:41.958 "supported_io_types": { 00:03:41.958 "read": true, 00:03:41.958 "write": true, 00:03:41.958 "unmap": true, 00:03:41.958 "flush": true, 00:03:41.958 "reset": true, 00:03:41.958 "nvme_admin": false, 00:03:41.958 "nvme_io": false, 00:03:41.958 "nvme_io_md": false, 00:03:41.958 "write_zeroes": true, 00:03:41.958 "zcopy": true, 00:03:41.958 
"get_zone_info": false, 00:03:41.958 "zone_management": false, 00:03:41.958 "zone_append": false, 00:03:41.958 "compare": false, 00:03:41.958 "compare_and_write": false, 00:03:41.958 "abort": true, 00:03:41.958 "seek_hole": false, 00:03:41.958 "seek_data": false, 00:03:41.958 "copy": true, 00:03:41.958 "nvme_iov_md": false 00:03:41.958 }, 00:03:41.958 "memory_domains": [ 00:03:41.958 { 00:03:41.958 "dma_device_id": "system", 00:03:41.958 "dma_device_type": 1 00:03:41.958 }, 00:03:41.958 { 00:03:41.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.958 "dma_device_type": 2 00:03:41.958 } 00:03:41.958 ], 00:03:41.958 "driver_specific": {} 00:03:41.958 } 00:03:41.958 ]' 00:03:41.958 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:41.958 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:41.958 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:41.959 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.959 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.219 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.219 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:42.219 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.219 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.219 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.219 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:42.219 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:42.219 16:20:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:42.219 00:03:42.219 real 0m0.112s 00:03:42.219 user 0m0.073s 00:03:42.219 sys 0m0.010s 00:03:42.219 16:20:21 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.219 16:20:21 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:03:42.219 ************************************ 00:03:42.219 END TEST rpc_plugins 00:03:42.219 ************************************ 00:03:42.219 16:20:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.219 16:20:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:42.219 16:20:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.219 16:20:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.219 16:20:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.219 ************************************ 00:03:42.219 START TEST rpc_trace_cmd_test 00:03:42.219 ************************************ 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:42.219 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1388671", 00:03:42.219 "tpoint_group_mask": "0x8", 00:03:42.219 "iscsi_conn": { 00:03:42.219 "mask": "0x2", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "scsi": { 00:03:42.219 "mask": "0x4", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "bdev": { 00:03:42.219 "mask": "0x8", 00:03:42.219 "tpoint_mask": "0xffffffffffffffff" 00:03:42.219 }, 00:03:42.219 "nvmf_rdma": { 00:03:42.219 "mask": "0x10", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "nvmf_tcp": { 00:03:42.219 "mask": "0x20", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 
00:03:42.219 "ftl": { 00:03:42.219 "mask": "0x40", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "blobfs": { 00:03:42.219 "mask": "0x80", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "dsa": { 00:03:42.219 "mask": "0x200", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "thread": { 00:03:42.219 "mask": "0x400", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "nvme_pcie": { 00:03:42.219 "mask": "0x800", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "iaa": { 00:03:42.219 "mask": "0x1000", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "nvme_tcp": { 00:03:42.219 "mask": "0x2000", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "bdev_nvme": { 00:03:42.219 "mask": "0x4000", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 }, 00:03:42.219 "sock": { 00:03:42.219 "mask": "0x8000", 00:03:42.219 "tpoint_mask": "0x0" 00:03:42.219 } 00:03:42.219 }' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:42.219 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:42.480 16:20:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:42.480 00:03:42.480 real 0m0.192s 00:03:42.480 user 0m0.172s 00:03:42.480 sys 0m0.011s 00:03:42.480 16:20:21 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.480 16:20:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:42.480 ************************************ 00:03:42.480 END TEST rpc_trace_cmd_test 00:03:42.480 ************************************ 00:03:42.480 16:20:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.480 16:20:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:42.480 16:20:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:42.480 16:20:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:42.480 16:20:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.480 16:20:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.480 16:20:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.480 ************************************ 00:03:42.480 START TEST rpc_daemon_integrity 00:03:42.480 ************************************ 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.480 16:20:21 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:42.480 { 00:03:42.480 "name": "Malloc2", 00:03:42.480 "aliases": [ 00:03:42.480 "e4d63490-d01d-452c-b6fe-f148274148e0" 00:03:42.480 ], 00:03:42.480 "product_name": "Malloc disk", 00:03:42.480 "block_size": 512, 00:03:42.480 "num_blocks": 16384, 00:03:42.480 "uuid": "e4d63490-d01d-452c-b6fe-f148274148e0", 00:03:42.480 "assigned_rate_limits": { 00:03:42.480 "rw_ios_per_sec": 0, 00:03:42.480 "rw_mbytes_per_sec": 0, 00:03:42.480 "r_mbytes_per_sec": 0, 00:03:42.480 "w_mbytes_per_sec": 0 00:03:42.480 }, 00:03:42.480 "claimed": false, 00:03:42.480 "zoned": false, 00:03:42.480 "supported_io_types": { 00:03:42.480 "read": true, 00:03:42.480 "write": true, 00:03:42.480 "unmap": true, 00:03:42.480 "flush": true, 00:03:42.480 "reset": true, 00:03:42.480 "nvme_admin": false, 00:03:42.480 "nvme_io": false, 00:03:42.480 "nvme_io_md": false, 00:03:42.480 "write_zeroes": true, 00:03:42.480 "zcopy": true, 00:03:42.480 "get_zone_info": false, 00:03:42.480 "zone_management": false, 00:03:42.480 "zone_append": false, 00:03:42.480 "compare": false, 00:03:42.480 "compare_and_write": false, 00:03:42.480 "abort": true, 00:03:42.480 "seek_hole": false, 00:03:42.480 "seek_data": false, 00:03:42.480 "copy": true, 00:03:42.480 "nvme_iov_md": false 00:03:42.480 }, 00:03:42.480 "memory_domains": [ 00:03:42.480 { 00:03:42.480 "dma_device_id": "system", 00:03:42.480 "dma_device_type": 
1 00:03:42.480 }, 00:03:42.480 { 00:03:42.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.480 "dma_device_type": 2 00:03:42.480 } 00:03:42.480 ], 00:03:42.480 "driver_specific": {} 00:03:42.480 } 00:03:42.480 ]' 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.480 16:20:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.480 [2024-07-15 16:20:21.996635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:42.480 [2024-07-15 16:20:21.996681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:42.480 [2024-07-15 16:20:21.996705] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e53c00 00:03:42.480 [2024-07-15 16:20:21.996720] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:42.480 [2024-07-15 16:20:21.998067] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:42.480 [2024-07-15 16:20:21.998093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:42.480 Passthru0 00:03:42.480 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.480 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:42.480 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.480 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.480 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.480 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:03:42.480 { 00:03:42.480 "name": "Malloc2", 00:03:42.480 "aliases": [ 00:03:42.480 "e4d63490-d01d-452c-b6fe-f148274148e0" 00:03:42.480 ], 00:03:42.480 "product_name": "Malloc disk", 00:03:42.480 "block_size": 512, 00:03:42.480 "num_blocks": 16384, 00:03:42.480 "uuid": "e4d63490-d01d-452c-b6fe-f148274148e0", 00:03:42.480 "assigned_rate_limits": { 00:03:42.480 "rw_ios_per_sec": 0, 00:03:42.480 "rw_mbytes_per_sec": 0, 00:03:42.480 "r_mbytes_per_sec": 0, 00:03:42.480 "w_mbytes_per_sec": 0 00:03:42.480 }, 00:03:42.480 "claimed": true, 00:03:42.480 "claim_type": "exclusive_write", 00:03:42.480 "zoned": false, 00:03:42.480 "supported_io_types": { 00:03:42.480 "read": true, 00:03:42.480 "write": true, 00:03:42.480 "unmap": true, 00:03:42.480 "flush": true, 00:03:42.480 "reset": true, 00:03:42.480 "nvme_admin": false, 00:03:42.480 "nvme_io": false, 00:03:42.480 "nvme_io_md": false, 00:03:42.480 "write_zeroes": true, 00:03:42.480 "zcopy": true, 00:03:42.480 "get_zone_info": false, 00:03:42.480 "zone_management": false, 00:03:42.480 "zone_append": false, 00:03:42.480 "compare": false, 00:03:42.480 "compare_and_write": false, 00:03:42.480 "abort": true, 00:03:42.480 "seek_hole": false, 00:03:42.480 "seek_data": false, 00:03:42.480 "copy": true, 00:03:42.480 "nvme_iov_md": false 00:03:42.480 }, 00:03:42.480 "memory_domains": [ 00:03:42.480 { 00:03:42.480 "dma_device_id": "system", 00:03:42.480 "dma_device_type": 1 00:03:42.480 }, 00:03:42.480 { 00:03:42.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.480 "dma_device_type": 2 00:03:42.480 } 00:03:42.480 ], 00:03:42.480 "driver_specific": {} 00:03:42.480 }, 00:03:42.480 { 00:03:42.480 "name": "Passthru0", 00:03:42.480 "aliases": [ 00:03:42.480 "c41ca602-65c2-5c94-bfcb-caae2907219a" 00:03:42.480 ], 00:03:42.480 "product_name": "passthru", 00:03:42.480 "block_size": 512, 00:03:42.480 "num_blocks": 16384, 00:03:42.480 "uuid": "c41ca602-65c2-5c94-bfcb-caae2907219a", 00:03:42.480 "assigned_rate_limits": { 00:03:42.480 
"rw_ios_per_sec": 0, 00:03:42.480 "rw_mbytes_per_sec": 0, 00:03:42.480 "r_mbytes_per_sec": 0, 00:03:42.480 "w_mbytes_per_sec": 0 00:03:42.480 }, 00:03:42.480 "claimed": false, 00:03:42.480 "zoned": false, 00:03:42.480 "supported_io_types": { 00:03:42.480 "read": true, 00:03:42.480 "write": true, 00:03:42.480 "unmap": true, 00:03:42.480 "flush": true, 00:03:42.480 "reset": true, 00:03:42.480 "nvme_admin": false, 00:03:42.480 "nvme_io": false, 00:03:42.480 "nvme_io_md": false, 00:03:42.480 "write_zeroes": true, 00:03:42.480 "zcopy": true, 00:03:42.480 "get_zone_info": false, 00:03:42.480 "zone_management": false, 00:03:42.480 "zone_append": false, 00:03:42.480 "compare": false, 00:03:42.480 "compare_and_write": false, 00:03:42.480 "abort": true, 00:03:42.481 "seek_hole": false, 00:03:42.481 "seek_data": false, 00:03:42.481 "copy": true, 00:03:42.481 "nvme_iov_md": false 00:03:42.481 }, 00:03:42.481 "memory_domains": [ 00:03:42.481 { 00:03:42.481 "dma_device_id": "system", 00:03:42.481 "dma_device_type": 1 00:03:42.481 }, 00:03:42.481 { 00:03:42.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.481 "dma_device_type": 2 00:03:42.481 } 00:03:42.481 ], 00:03:42.481 "driver_specific": { 00:03:42.481 "passthru": { 00:03:42.481 "name": "Passthru0", 00:03:42.481 "base_bdev_name": "Malloc2" 00:03:42.481 } 00:03:42.481 } 00:03:42.481 } 00:03:42.481 ]' 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:42.481 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.740 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:42.740 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:42.740 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:42.740 16:20:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.740 00:03:42.740 real 0m0.223s 00:03:42.740 user 0m0.149s 00:03:42.740 sys 0m0.021s 00:03:42.740 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.740 16:20:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.740 ************************************ 00:03:42.740 END TEST rpc_daemon_integrity 00:03:42.740 ************************************ 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:42.740 16:20:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:42.740 16:20:22 rpc -- rpc/rpc.sh@84 -- # killprocess 1388671 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@948 -- # '[' -z 1388671 ']' 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@952 -- # kill -0 1388671 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@953 -- # uname 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1388671 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1388671' 00:03:42.740 killing process with pid 1388671 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@967 -- # kill 1388671 00:03:42.740 16:20:22 rpc -- common/autotest_common.sh@972 -- # wait 1388671 00:03:43.308 00:03:43.308 real 0m1.973s 00:03:43.308 user 0m2.473s 00:03:43.308 sys 0m0.570s 00:03:43.308 16:20:22 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.308 16:20:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.308 ************************************ 00:03:43.308 END TEST rpc 00:03:43.308 ************************************ 00:03:43.308 16:20:22 -- common/autotest_common.sh@1142 -- # return 0 00:03:43.308 16:20:22 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.308 16:20:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.308 16:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.308 16:20:22 -- common/autotest_common.sh@10 -- # set +x 00:03:43.308 ************************************ 00:03:43.308 START TEST skip_rpc 00:03:43.308 ************************************ 00:03:43.308 16:20:22 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.308 * Looking for test storage... 
00:03:43.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:43.308 16:20:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.308 16:20:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:43.308 16:20:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:43.308 16:20:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.308 16:20:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.308 16:20:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.308 ************************************ 00:03:43.308 START TEST skip_rpc 00:03:43.308 ************************************ 00:03:43.308 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:43.308 16:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1389108 00:03:43.308 16:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:43.308 16:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.308 16:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:43.308 [2024-07-15 16:20:22.798277] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:03:43.309 [2024-07-15 16:20:22.798357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389108 ] 00:03:43.309 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.309 [2024-07-15 16:20:22.857510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.565 [2024-07-15 16:20:22.973149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1389108 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1389108 ']' 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1389108 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389108 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389108' 00:03:48.843 killing process with pid 1389108 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1389108 00:03:48.843 16:20:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1389108 00:03:48.843 00:03:48.843 real 0m5.487s 00:03:48.843 user 0m5.174s 00:03:48.843 sys 0m0.319s 00:03:48.843 16:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.843 16:20:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.844 ************************************ 00:03:48.844 END TEST skip_rpc 00:03:48.844 ************************************ 00:03:48.844 16:20:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:48.844 16:20:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:48.844 16:20:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.844 16:20:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.844 
16:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.844 ************************************ 00:03:48.844 START TEST skip_rpc_with_json 00:03:48.844 ************************************ 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1389802 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1389802 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1389802 ']' 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:48.844 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.844 [2024-07-15 16:20:28.336399] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:03:48.844 [2024-07-15 16:20:28.336462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389802 ] 00:03:48.844 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.844 [2024-07-15 16:20:28.391853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.103 [2024-07-15 16:20:28.502575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.362 [2024-07-15 16:20:28.775192] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:49.362 request: 00:03:49.362 { 00:03:49.362 "trtype": "tcp", 00:03:49.362 "method": "nvmf_get_transports", 00:03:49.362 "req_id": 1 00:03:49.362 } 00:03:49.362 Got JSON-RPC error response 00:03:49.362 response: 00:03:49.362 { 00:03:49.362 "code": -19, 00:03:49.362 "message": "No such device" 00:03:49.362 } 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.362 [2024-07-15 16:20:28.783317] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.362 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.362 { 00:03:49.362 "subsystems": [ 00:03:49.362 { 00:03:49.362 "subsystem": "vfio_user_target", 00:03:49.362 "config": null 00:03:49.362 }, 00:03:49.362 { 00:03:49.362 "subsystem": "keyring", 00:03:49.362 "config": [] 00:03:49.362 }, 00:03:49.362 { 00:03:49.362 "subsystem": "iobuf", 00:03:49.362 "config": [ 00:03:49.362 { 00:03:49.362 "method": "iobuf_set_options", 00:03:49.362 "params": { 00:03:49.362 "small_pool_count": 8192, 00:03:49.362 "large_pool_count": 1024, 00:03:49.362 "small_bufsize": 8192, 00:03:49.363 "large_bufsize": 135168 00:03:49.363 } 00:03:49.363 } 00:03:49.363 ] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "sock", 00:03:49.363 "config": [ 00:03:49.363 { 00:03:49.363 "method": "sock_set_default_impl", 00:03:49.363 "params": { 00:03:49.363 "impl_name": "posix" 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "sock_impl_set_options", 00:03:49.363 "params": { 00:03:49.363 "impl_name": "ssl", 00:03:49.363 "recv_buf_size": 4096, 00:03:49.363 "send_buf_size": 4096, 00:03:49.363 "enable_recv_pipe": true, 00:03:49.363 "enable_quickack": false, 00:03:49.363 "enable_placement_id": 0, 00:03:49.363 "enable_zerocopy_send_server": true, 00:03:49.363 "enable_zerocopy_send_client": false, 00:03:49.363 "zerocopy_threshold": 0, 
00:03:49.363 "tls_version": 0, 00:03:49.363 "enable_ktls": false 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "sock_impl_set_options", 00:03:49.363 "params": { 00:03:49.363 "impl_name": "posix", 00:03:49.363 "recv_buf_size": 2097152, 00:03:49.363 "send_buf_size": 2097152, 00:03:49.363 "enable_recv_pipe": true, 00:03:49.363 "enable_quickack": false, 00:03:49.363 "enable_placement_id": 0, 00:03:49.363 "enable_zerocopy_send_server": true, 00:03:49.363 "enable_zerocopy_send_client": false, 00:03:49.363 "zerocopy_threshold": 0, 00:03:49.363 "tls_version": 0, 00:03:49.363 "enable_ktls": false 00:03:49.363 } 00:03:49.363 } 00:03:49.363 ] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "vmd", 00:03:49.363 "config": [] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "accel", 00:03:49.363 "config": [ 00:03:49.363 { 00:03:49.363 "method": "accel_set_options", 00:03:49.363 "params": { 00:03:49.363 "small_cache_size": 128, 00:03:49.363 "large_cache_size": 16, 00:03:49.363 "task_count": 2048, 00:03:49.363 "sequence_count": 2048, 00:03:49.363 "buf_count": 2048 00:03:49.363 } 00:03:49.363 } 00:03:49.363 ] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "bdev", 00:03:49.363 "config": [ 00:03:49.363 { 00:03:49.363 "method": "bdev_set_options", 00:03:49.363 "params": { 00:03:49.363 "bdev_io_pool_size": 65535, 00:03:49.363 "bdev_io_cache_size": 256, 00:03:49.363 "bdev_auto_examine": true, 00:03:49.363 "iobuf_small_cache_size": 128, 00:03:49.363 "iobuf_large_cache_size": 16 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "bdev_raid_set_options", 00:03:49.363 "params": { 00:03:49.363 "process_window_size_kb": 1024 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "bdev_iscsi_set_options", 00:03:49.363 "params": { 00:03:49.363 "timeout_sec": 30 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "bdev_nvme_set_options", 00:03:49.363 "params": { 00:03:49.363 "action_on_timeout": 
"none", 00:03:49.363 "timeout_us": 0, 00:03:49.363 "timeout_admin_us": 0, 00:03:49.363 "keep_alive_timeout_ms": 10000, 00:03:49.363 "arbitration_burst": 0, 00:03:49.363 "low_priority_weight": 0, 00:03:49.363 "medium_priority_weight": 0, 00:03:49.363 "high_priority_weight": 0, 00:03:49.363 "nvme_adminq_poll_period_us": 10000, 00:03:49.363 "nvme_ioq_poll_period_us": 0, 00:03:49.363 "io_queue_requests": 0, 00:03:49.363 "delay_cmd_submit": true, 00:03:49.363 "transport_retry_count": 4, 00:03:49.363 "bdev_retry_count": 3, 00:03:49.363 "transport_ack_timeout": 0, 00:03:49.363 "ctrlr_loss_timeout_sec": 0, 00:03:49.363 "reconnect_delay_sec": 0, 00:03:49.363 "fast_io_fail_timeout_sec": 0, 00:03:49.363 "disable_auto_failback": false, 00:03:49.363 "generate_uuids": false, 00:03:49.363 "transport_tos": 0, 00:03:49.363 "nvme_error_stat": false, 00:03:49.363 "rdma_srq_size": 0, 00:03:49.363 "io_path_stat": false, 00:03:49.363 "allow_accel_sequence": false, 00:03:49.363 "rdma_max_cq_size": 0, 00:03:49.363 "rdma_cm_event_timeout_ms": 0, 00:03:49.363 "dhchap_digests": [ 00:03:49.363 "sha256", 00:03:49.363 "sha384", 00:03:49.363 "sha512" 00:03:49.363 ], 00:03:49.363 "dhchap_dhgroups": [ 00:03:49.363 "null", 00:03:49.363 "ffdhe2048", 00:03:49.363 "ffdhe3072", 00:03:49.363 "ffdhe4096", 00:03:49.363 "ffdhe6144", 00:03:49.363 "ffdhe8192" 00:03:49.363 ] 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "bdev_nvme_set_hotplug", 00:03:49.363 "params": { 00:03:49.363 "period_us": 100000, 00:03:49.363 "enable": false 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "bdev_wait_for_examine" 00:03:49.363 } 00:03:49.363 ] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "scsi", 00:03:49.363 "config": null 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "scheduler", 00:03:49.363 "config": [ 00:03:49.363 { 00:03:49.363 "method": "framework_set_scheduler", 00:03:49.363 "params": { 00:03:49.363 "name": "static" 00:03:49.363 } 00:03:49.363 } 
00:03:49.363 ] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "vhost_scsi", 00:03:49.363 "config": [] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "vhost_blk", 00:03:49.363 "config": [] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "ublk", 00:03:49.363 "config": [] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "nbd", 00:03:49.363 "config": [] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "nvmf", 00:03:49.363 "config": [ 00:03:49.363 { 00:03:49.363 "method": "nvmf_set_config", 00:03:49.363 "params": { 00:03:49.363 "discovery_filter": "match_any", 00:03:49.363 "admin_cmd_passthru": { 00:03:49.363 "identify_ctrlr": false 00:03:49.363 } 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "nvmf_set_max_subsystems", 00:03:49.363 "params": { 00:03:49.363 "max_subsystems": 1024 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "nvmf_set_crdt", 00:03:49.363 "params": { 00:03:49.363 "crdt1": 0, 00:03:49.363 "crdt2": 0, 00:03:49.363 "crdt3": 0 00:03:49.363 } 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "method": "nvmf_create_transport", 00:03:49.363 "params": { 00:03:49.363 "trtype": "TCP", 00:03:49.363 "max_queue_depth": 128, 00:03:49.363 "max_io_qpairs_per_ctrlr": 127, 00:03:49.363 "in_capsule_data_size": 4096, 00:03:49.363 "max_io_size": 131072, 00:03:49.363 "io_unit_size": 131072, 00:03:49.363 "max_aq_depth": 128, 00:03:49.363 "num_shared_buffers": 511, 00:03:49.363 "buf_cache_size": 4294967295, 00:03:49.363 "dif_insert_or_strip": false, 00:03:49.363 "zcopy": false, 00:03:49.363 "c2h_success": true, 00:03:49.363 "sock_priority": 0, 00:03:49.363 "abort_timeout_sec": 1, 00:03:49.363 "ack_timeout": 0, 00:03:49.363 "data_wr_pool_size": 0 00:03:49.363 } 00:03:49.363 } 00:03:49.363 ] 00:03:49.363 }, 00:03:49.363 { 00:03:49.363 "subsystem": "iscsi", 00:03:49.363 "config": [ 00:03:49.363 { 00:03:49.363 "method": "iscsi_set_options", 00:03:49.363 "params": { 00:03:49.363 "node_base": 
"iqn.2016-06.io.spdk", 00:03:49.363 "max_sessions": 128, 00:03:49.363 "max_connections_per_session": 2, 00:03:49.363 "max_queue_depth": 64, 00:03:49.363 "default_time2wait": 2, 00:03:49.363 "default_time2retain": 20, 00:03:49.363 "first_burst_length": 8192, 00:03:49.363 "immediate_data": true, 00:03:49.363 "allow_duplicated_isid": false, 00:03:49.363 "error_recovery_level": 0, 00:03:49.363 "nop_timeout": 60, 00:03:49.363 "nop_in_interval": 30, 00:03:49.363 "disable_chap": false, 00:03:49.363 "require_chap": false, 00:03:49.363 "mutual_chap": false, 00:03:49.363 "chap_group": 0, 00:03:49.363 "max_large_datain_per_connection": 64, 00:03:49.363 "max_r2t_per_connection": 4, 00:03:49.363 "pdu_pool_size": 36864, 00:03:49.363 "immediate_data_pool_size": 16384, 00:03:49.363 "data_out_pool_size": 2048 00:03:49.363 } 00:03:49.363 } 00:03:49.363 ] 00:03:49.363 } 00:03:49.363 ] 00:03:49.363 } 00:03:49.363 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:49.363 16:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1389802 00:03:49.363 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1389802 ']' 00:03:49.363 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1389802 00:03:49.363 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:49.363 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:49.363 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389802 00:03:49.622 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:49.622 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:49.622 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389802' 00:03:49.622 
killing process with pid 1389802 00:03:49.622 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1389802 00:03:49.622 16:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1389802 00:03:49.882 16:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1389944 00:03:49.882 16:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.882 16:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:55.157 16:20:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1389944 00:03:55.157 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1389944 ']' 00:03:55.157 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1389944 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389944 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389944' 00:03:55.158 killing process with pid 1389944 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1389944 00:03:55.158 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1389944 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.416 00:03:55.416 real 0m6.625s 00:03:55.416 user 0m6.208s 00:03:55.416 sys 0m0.700s 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.416 ************************************ 00:03:55.416 END TEST skip_rpc_with_json 00:03:55.416 ************************************ 00:03:55.416 16:20:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:55.416 16:20:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:55.416 16:20:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.416 16:20:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.416 16:20:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.416 ************************************ 00:03:55.416 START TEST skip_rpc_with_delay 00:03:55.416 ************************************ 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:55.416 16:20:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.675 [2024-07-15 16:20:35.015570] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:55.675 [2024-07-15 16:20:35.015701] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:55.675 16:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:55.675 16:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:55.675 16:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:55.675 16:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:55.675 00:03:55.675 real 0m0.070s 00:03:55.675 user 0m0.045s 00:03:55.675 sys 0m0.025s 00:03:55.675 16:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.675 16:20:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:55.675 ************************************ 00:03:55.675 END TEST skip_rpc_with_delay 00:03:55.675 ************************************ 00:03:55.675 16:20:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:55.675 16:20:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:55.675 16:20:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:55.675 16:20:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:55.676 16:20:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.676 16:20:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.676 16:20:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.676 ************************************ 00:03:55.676 START TEST exit_on_failed_rpc_init 00:03:55.676 ************************************ 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1390662 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1390662 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1390662 ']' 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.676 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.676 [2024-07-15 16:20:35.130335] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:03:55.676 [2024-07-15 16:20:35.130429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390662 ] 00:03:55.676 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.676 [2024-07-15 16:20:35.190694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.935 [2024-07-15 16:20:35.301203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.194 16:20:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:56.194 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.194 [2024-07-15 16:20:35.612400] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:03:56.195 [2024-07-15 16:20:35.612489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390667 ] 00:03:56.195 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.195 [2024-07-15 16:20:35.674333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.195 [2024-07-15 16:20:35.791373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:56.195 [2024-07-15 16:20:35.791495] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:56.195 [2024-07-15 16:20:35.791523] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:56.195 [2024-07-15 16:20:35.791538] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1390662 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1390662 ']' 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1390662 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1390662 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1390662' 
00:03:56.452 killing process with pid 1390662 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1390662 00:03:56.452 16:20:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1390662 00:03:57.019 00:03:57.019 real 0m1.332s 00:03:57.019 user 0m1.496s 00:03:57.019 sys 0m0.459s 00:03:57.019 16:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.019 16:20:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.019 ************************************ 00:03:57.019 END TEST exit_on_failed_rpc_init 00:03:57.019 ************************************ 00:03:57.019 16:20:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.019 16:20:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.019 00:03:57.019 real 0m13.769s 00:03:57.019 user 0m13.022s 00:03:57.019 sys 0m1.676s 00:03:57.019 16:20:36 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.019 16:20:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.019 ************************************ 00:03:57.019 END TEST skip_rpc 00:03:57.019 ************************************ 00:03:57.019 16:20:36 -- common/autotest_common.sh@1142 -- # return 0 00:03:57.019 16:20:36 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:57.019 16:20:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.019 16:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.019 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:03:57.019 ************************************ 00:03:57.019 START TEST rpc_client 00:03:57.019 ************************************ 00:03:57.019 16:20:36 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:57.019 * Looking for test storage... 00:03:57.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:57.019 16:20:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:57.019 OK 00:03:57.019 16:20:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:57.019 00:03:57.019 real 0m0.071s 00:03:57.019 user 0m0.030s 00:03:57.019 sys 0m0.046s 00:03:57.019 16:20:36 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.019 16:20:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:57.019 ************************************ 00:03:57.019 END TEST rpc_client 00:03:57.019 ************************************ 00:03:57.019 16:20:36 -- common/autotest_common.sh@1142 -- # return 0 00:03:57.019 16:20:36 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:57.020 16:20:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.020 16:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.020 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:03:57.020 ************************************ 00:03:57.020 START TEST json_config 00:03:57.020 ************************************ 00:03:57.020 16:20:36 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:57.278 16:20:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.278 
16:20:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.278 16:20:36 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.278 16:20:36 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.278 16:20:36 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.278 16:20:36 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.278 16:20:36 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.278 16:20:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.278 16:20:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.278 16:20:36 json_config -- paths/export.sh@5 -- # export PATH 00:03:57.279 16:20:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@47 -- # : 0 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:57.279 
16:20:36 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:57.279 16:20:36 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:57.279 INFO: JSON configuration test init 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.279 16:20:36 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:57.279 16:20:36 json_config -- json_config/common.sh@9 -- # local app=target 00:03:57.279 16:20:36 json_config -- json_config/common.sh@10 -- # shift 00:03:57.279 16:20:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.279 16:20:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.279 16:20:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.279 16:20:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.279 16:20:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:03:57.279 16:20:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1390909 00:03:57.279 16:20:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:57.279 16:20:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.279 Waiting for target to run... 00:03:57.279 16:20:36 json_config -- json_config/common.sh@25 -- # waitforlisten 1390909 /var/tmp/spdk_tgt.sock 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@829 -- # '[' -z 1390909 ']' 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:57.279 16:20:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.279 [2024-07-15 16:20:36.710015] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:03:57.279 [2024-07-15 16:20:36.710116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390909 ] 00:03:57.279 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.568 [2024-07-15 16:20:37.058050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.828 [2024-07-15 16:20:37.148706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.086 16:20:37 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:58.086 16:20:37 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:58.086 16:20:37 json_config -- json_config/common.sh@26 -- # echo '' 00:03:58.086 00:03:58.086 16:20:37 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:58.086 16:20:37 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:58.086 16:20:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.086 16:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.086 16:20:37 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:58.086 16:20:37 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:58.086 16:20:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:58.086 16:20:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.086 16:20:37 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:58.086 16:20:37 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:58.086 16:20:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:01.373 
16:20:40 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:01.373 16:20:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:01.373 16:20:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.373 16:20:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.373 16:20:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:01.373 16:20:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:01.373 16:20:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:01.373 16:20:40 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:01.373 16:20:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:01.373 16:20:40 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:01.630 16:20:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.630 16:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:01.630 16:20:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.630 16:20:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:01.630 16:20:41 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:01.630 16:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:01.886 MallocForNvmf0 00:04:01.886 16:20:41 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:01.886 16:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:02.144 MallocForNvmf1 00:04:02.144 16:20:41 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:02.144 16:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:02.400 [2024-07-15 16:20:41.845194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.400 16:20:41 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:02.400 16:20:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:02.658 16:20:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:02.658 16:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:02.915 16:20:42 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:02.915 16:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:03.175 16:20:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:03.175 16:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:03.436 [2024-07-15 16:20:42.816367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:03.436 16:20:42 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:03.436 16:20:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.436 16:20:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.436 16:20:42 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:04:03.436 16:20:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.436 16:20:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.436 16:20:42 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:03.436 16:20:42 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:03.436 16:20:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:03.693 MallocBdevForConfigChangeCheck 00:04:03.693 16:20:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:03.693 16:20:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.693 16:20:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.693 16:20:43 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:03.693 16:20:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.950 16:20:43 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:03.950 INFO: shutting down applications... 
00:04:03.950 16:20:43 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:03.950 16:20:43 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:03.950 16:20:43 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:03.950 16:20:43 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:05.852 Calling clear_iscsi_subsystem 00:04:05.852 Calling clear_nvmf_subsystem 00:04:05.852 Calling clear_nbd_subsystem 00:04:05.852 Calling clear_ublk_subsystem 00:04:05.852 Calling clear_vhost_blk_subsystem 00:04:05.852 Calling clear_vhost_scsi_subsystem 00:04:05.852 Calling clear_bdev_subsystem 00:04:05.852 16:20:45 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:05.852 16:20:45 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:05.852 16:20:45 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:05.852 16:20:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.852 16:20:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:05.852 16:20:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:06.111 16:20:45 json_config -- json_config/json_config.sh@345 -- # break 00:04:06.111 16:20:45 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:06.111 16:20:45 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:06.111 16:20:45 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:06.111 16:20:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:06.111 16:20:45 json_config -- json_config/common.sh@35 -- # [[ -n 1390909 ]] 00:04:06.111 16:20:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1390909 00:04:06.111 16:20:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:06.111 16:20:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.111 16:20:45 json_config -- json_config/common.sh@41 -- # kill -0 1390909 00:04:06.111 16:20:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:06.677 16:20:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:06.677 16:20:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.677 16:20:46 json_config -- json_config/common.sh@41 -- # kill -0 1390909 00:04:06.677 16:20:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:06.677 16:20:46 json_config -- json_config/common.sh@43 -- # break 00:04:06.677 16:20:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:06.677 16:20:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:06.677 SPDK target shutdown done 00:04:06.677 16:20:46 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:06.677 INFO: relaunching applications... 
00:04:06.677 16:20:46 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.677 16:20:46 json_config -- json_config/common.sh@9 -- # local app=target 00:04:06.677 16:20:46 json_config -- json_config/common.sh@10 -- # shift 00:04:06.677 16:20:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.677 16:20:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.677 16:20:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.677 16:20:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.677 16:20:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.677 16:20:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1392151 00:04:06.677 16:20:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.677 16:20:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.677 Waiting for target to run... 00:04:06.677 16:20:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1392151 /var/tmp/spdk_tgt.sock 00:04:06.677 16:20:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 1392151 ']' 00:04:06.677 16:20:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.677 16:20:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:06.677 16:20:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:06.677 16:20:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:06.677 16:20:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.677 [2024-07-15 16:20:46.122002] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:06.678 [2024-07-15 16:20:46.122089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392151 ] 00:04:06.678 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.242 [2024-07-15 16:20:46.622036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.242 [2024-07-15 16:20:46.725664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.528 [2024-07-15 16:20:49.768710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.528 [2024-07-15 16:20:49.801193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.092 16:20:50 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.092 16:20:50 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:11.092 16:20:50 json_config -- json_config/common.sh@26 -- # echo '' 00:04:11.092 00:04:11.092 16:20:50 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:11.092 16:20:50 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:11.092 INFO: Checking if target configuration is the same... 
00:04:11.092 16:20:50 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.092 16:20:50 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:11.092 16:20:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.092 + '[' 2 -ne 2 ']' 00:04:11.092 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:11.092 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:11.092 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.092 +++ basename /dev/fd/62 00:04:11.092 ++ mktemp /tmp/62.XXX 00:04:11.092 + tmp_file_1=/tmp/62.dbm 00:04:11.092 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.092 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.092 + tmp_file_2=/tmp/spdk_tgt_config.json.5TP 00:04:11.092 + ret=0 00:04:11.092 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.348 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.606 + diff -u /tmp/62.dbm /tmp/spdk_tgt_config.json.5TP 00:04:11.606 + echo 'INFO: JSON config files are the same' 00:04:11.606 INFO: JSON config files are the same 00:04:11.606 + rm /tmp/62.dbm /tmp/spdk_tgt_config.json.5TP 00:04:11.606 + exit 0 00:04:11.606 16:20:50 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:11.606 16:20:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:11.606 INFO: changing configuration and checking if this can be detected... 
00:04:11.606 16:20:50 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.606 16:20:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.866 16:20:51 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.866 16:20:51 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:11.866 16:20:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.866 + '[' 2 -ne 2 ']' 00:04:11.866 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:11.866 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:11.866 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.866 +++ basename /dev/fd/62 00:04:11.866 ++ mktemp /tmp/62.XXX 00:04:11.866 + tmp_file_1=/tmp/62.7n2 00:04:11.866 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.866 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.866 + tmp_file_2=/tmp/spdk_tgt_config.json.T5d 00:04:11.866 + ret=0 00:04:11.866 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.124 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.124 + diff -u /tmp/62.7n2 /tmp/spdk_tgt_config.json.T5d 00:04:12.124 + ret=1 00:04:12.124 + echo '=== Start of file: /tmp/62.7n2 ===' 00:04:12.124 + cat /tmp/62.7n2 00:04:12.124 + echo '=== End of file: /tmp/62.7n2 ===' 00:04:12.124 + echo '' 00:04:12.124 + echo '=== Start of file: /tmp/spdk_tgt_config.json.T5d ===' 00:04:12.124 + cat /tmp/spdk_tgt_config.json.T5d 00:04:12.124 + echo '=== End of file: /tmp/spdk_tgt_config.json.T5d ===' 00:04:12.124 + echo '' 00:04:12.124 + rm /tmp/62.7n2 /tmp/spdk_tgt_config.json.T5d 00:04:12.124 + exit 1 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:12.124 INFO: configuration change detected. 
00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@317 -- # [[ -n 1392151 ]] 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.124 16:20:51 json_config -- json_config/json_config.sh@323 -- # killprocess 1392151 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@948 -- # '[' -z 1392151 ']' 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@952 -- # kill -0 
1392151 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@953 -- # uname 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1392151 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1392151' 00:04:12.124 killing process with pid 1392151 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@967 -- # kill 1392151 00:04:12.124 16:20:51 json_config -- common/autotest_common.sh@972 -- # wait 1392151 00:04:14.027 16:20:53 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.027 16:20:53 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:14.027 16:20:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.027 16:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.027 16:20:53 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:14.027 16:20:53 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:14.027 INFO: Success 00:04:14.027 00:04:14.027 real 0m16.768s 00:04:14.027 user 0m18.737s 00:04:14.027 sys 0m2.040s 00:04:14.027 16:20:53 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.027 16:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.027 ************************************ 00:04:14.027 END TEST json_config 00:04:14.027 ************************************ 00:04:14.027 16:20:53 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.027 16:20:53 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.027 16:20:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.027 16:20:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.027 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:04:14.027 ************************************ 00:04:14.027 START TEST json_config_extra_key 00:04:14.027 ************************************ 00:04:14.027 16:20:53 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.027 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.027 16:20:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:14.027 16:20:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.027 16:20:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.027 16:20:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.027 16:20:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.027 16:20:53 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.027 16:20:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.027 16:20:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:14.028 16:20:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:14.028 16:20:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:14.028 INFO: launching applications... 
00:04:14.028 16:20:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1393151 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.028 Waiting for target to run... 
00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.028 16:20:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1393151 /var/tmp/spdk_tgt.sock 00:04:14.028 16:20:53 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1393151 ']' 00:04:14.028 16:20:53 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.028 16:20:53 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.028 16:20:53 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.028 16:20:53 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.028 16:20:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.028 [2024-07-15 16:20:53.516974] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:14.028 [2024-07-15 16:20:53.517054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393151 ] 00:04:14.028 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.631 [2024-07-15 16:20:54.017262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.631 [2024-07-15 16:20:54.119027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.888 16:20:54 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.888 16:20:54 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:14.888 00:04:14.888 16:20:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:14.888 INFO: shutting down applications... 
00:04:14.888 16:20:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1393151 ]] 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1393151 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1393151 00:04:14.888 16:20:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.455 16:20:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.455 16:20:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.455 16:20:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1393151 00:04:15.455 16:20:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:16.022 16:20:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:16.022 16:20:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.022 16:20:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1393151 00:04:16.022 16:20:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:16.022 16:20:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:16.022 16:20:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:16.022 16:20:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:16.022 SPDK target shutdown done 00:04:16.022 16:20:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # 
echo Success 00:04:16.022 Success 00:04:16.022 00:04:16.022 real 0m2.056s 00:04:16.022 user 0m1.426s 00:04:16.022 sys 0m0.597s 00:04:16.022 16:20:55 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.022 16:20:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:16.022 ************************************ 00:04:16.022 END TEST json_config_extra_key 00:04:16.022 ************************************ 00:04:16.022 16:20:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:16.022 16:20:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:16.022 16:20:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.022 16:20:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.022 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:04:16.022 ************************************ 00:04:16.022 START TEST alias_rpc 00:04:16.022 ************************************ 00:04:16.022 16:20:55 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:16.022 * Looking for test storage... 
00:04:16.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:16.022 16:20:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:16.022 16:20:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1393467 00:04:16.022 16:20:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.022 16:20:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1393467 00:04:16.022 16:20:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1393467 ']' 00:04:16.022 16:20:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.022 16:20:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:16.022 16:20:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.022 16:20:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:16.022 16:20:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.022 [2024-07-15 16:20:55.613089] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:16.022 [2024-07-15 16:20:55.613189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393467 ] 00:04:16.281 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.281 [2024-07-15 16:20:55.671466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.281 [2024-07-15 16:20:55.776524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.218 16:20:56 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:17.218 16:20:56 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:17.218 16:20:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:17.218 16:20:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1393467 00:04:17.218 16:20:56 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1393467 ']' 00:04:17.218 16:20:56 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1393467 00:04:17.218 16:20:56 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:17.218 16:20:56 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:17.218 16:20:56 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1393467 00:04:17.478 16:20:56 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:17.478 16:20:56 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:17.478 16:20:56 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1393467' 00:04:17.478 killing process with pid 1393467 00:04:17.478 16:20:56 alias_rpc -- common/autotest_common.sh@967 -- # kill 1393467 00:04:17.478 16:20:56 alias_rpc -- common/autotest_common.sh@972 -- # wait 1393467 00:04:17.736 00:04:17.736 real 0m1.773s 00:04:17.736 user 0m2.035s 
00:04:17.736 sys 0m0.444s 00:04:17.736 16:20:57 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.736 16:20:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.736 ************************************ 00:04:17.736 END TEST alias_rpc 00:04:17.736 ************************************ 00:04:17.736 16:20:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.736 16:20:57 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:17.736 16:20:57 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.736 16:20:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.736 16:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.736 16:20:57 -- common/autotest_common.sh@10 -- # set +x 00:04:17.994 ************************************ 00:04:17.994 START TEST spdkcli_tcp 00:04:17.994 ************************************ 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.994 * Looking for test storage... 
00:04:17.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1393778 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:17.994 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1393778 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1393778 ']' 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.994 16:20:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.994 [2024-07-15 16:20:57.443020] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:17.994 [2024-07-15 16:20:57.443101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393778 ] 00:04:17.994 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.994 [2024-07-15 16:20:57.499903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.253 [2024-07-15 16:20:57.609009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.253 [2024-07-15 16:20:57.609014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.513 16:20:57 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.513 16:20:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:18.513 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1393792 00:04:18.513 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:18.513 16:20:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:18.513 [ 00:04:18.513 "bdev_malloc_delete", 00:04:18.513 "bdev_malloc_create", 00:04:18.513 "bdev_null_resize", 00:04:18.513 "bdev_null_delete", 00:04:18.513 "bdev_null_create", 00:04:18.513 "bdev_nvme_cuse_unregister", 00:04:18.513 "bdev_nvme_cuse_register", 00:04:18.513 "bdev_opal_new_user", 00:04:18.513 "bdev_opal_set_lock_state", 00:04:18.513 "bdev_opal_delete", 00:04:18.513 "bdev_opal_get_info", 00:04:18.513 "bdev_opal_create", 00:04:18.513 "bdev_nvme_opal_revert", 00:04:18.513 
"bdev_nvme_opal_init", 00:04:18.513 "bdev_nvme_send_cmd", 00:04:18.513 "bdev_nvme_get_path_iostat", 00:04:18.513 "bdev_nvme_get_mdns_discovery_info", 00:04:18.513 "bdev_nvme_stop_mdns_discovery", 00:04:18.513 "bdev_nvme_start_mdns_discovery", 00:04:18.513 "bdev_nvme_set_multipath_policy", 00:04:18.513 "bdev_nvme_set_preferred_path", 00:04:18.513 "bdev_nvme_get_io_paths", 00:04:18.513 "bdev_nvme_remove_error_injection", 00:04:18.513 "bdev_nvme_add_error_injection", 00:04:18.513 "bdev_nvme_get_discovery_info", 00:04:18.513 "bdev_nvme_stop_discovery", 00:04:18.513 "bdev_nvme_start_discovery", 00:04:18.513 "bdev_nvme_get_controller_health_info", 00:04:18.513 "bdev_nvme_disable_controller", 00:04:18.513 "bdev_nvme_enable_controller", 00:04:18.513 "bdev_nvme_reset_controller", 00:04:18.513 "bdev_nvme_get_transport_statistics", 00:04:18.513 "bdev_nvme_apply_firmware", 00:04:18.513 "bdev_nvme_detach_controller", 00:04:18.513 "bdev_nvme_get_controllers", 00:04:18.513 "bdev_nvme_attach_controller", 00:04:18.513 "bdev_nvme_set_hotplug", 00:04:18.513 "bdev_nvme_set_options", 00:04:18.513 "bdev_passthru_delete", 00:04:18.513 "bdev_passthru_create", 00:04:18.513 "bdev_lvol_set_parent_bdev", 00:04:18.513 "bdev_lvol_set_parent", 00:04:18.513 "bdev_lvol_check_shallow_copy", 00:04:18.513 "bdev_lvol_start_shallow_copy", 00:04:18.513 "bdev_lvol_grow_lvstore", 00:04:18.513 "bdev_lvol_get_lvols", 00:04:18.513 "bdev_lvol_get_lvstores", 00:04:18.513 "bdev_lvol_delete", 00:04:18.513 "bdev_lvol_set_read_only", 00:04:18.513 "bdev_lvol_resize", 00:04:18.513 "bdev_lvol_decouple_parent", 00:04:18.513 "bdev_lvol_inflate", 00:04:18.513 "bdev_lvol_rename", 00:04:18.513 "bdev_lvol_clone_bdev", 00:04:18.513 "bdev_lvol_clone", 00:04:18.513 "bdev_lvol_snapshot", 00:04:18.513 "bdev_lvol_create", 00:04:18.513 "bdev_lvol_delete_lvstore", 00:04:18.513 "bdev_lvol_rename_lvstore", 00:04:18.513 "bdev_lvol_create_lvstore", 00:04:18.513 "bdev_raid_set_options", 00:04:18.513 "bdev_raid_remove_base_bdev", 
00:04:18.513 "bdev_raid_add_base_bdev", 00:04:18.513 "bdev_raid_delete", 00:04:18.513 "bdev_raid_create", 00:04:18.513 "bdev_raid_get_bdevs", 00:04:18.513 "bdev_error_inject_error", 00:04:18.513 "bdev_error_delete", 00:04:18.513 "bdev_error_create", 00:04:18.513 "bdev_split_delete", 00:04:18.513 "bdev_split_create", 00:04:18.513 "bdev_delay_delete", 00:04:18.513 "bdev_delay_create", 00:04:18.513 "bdev_delay_update_latency", 00:04:18.513 "bdev_zone_block_delete", 00:04:18.513 "bdev_zone_block_create", 00:04:18.513 "blobfs_create", 00:04:18.513 "blobfs_detect", 00:04:18.513 "blobfs_set_cache_size", 00:04:18.513 "bdev_aio_delete", 00:04:18.513 "bdev_aio_rescan", 00:04:18.513 "bdev_aio_create", 00:04:18.513 "bdev_ftl_set_property", 00:04:18.513 "bdev_ftl_get_properties", 00:04:18.513 "bdev_ftl_get_stats", 00:04:18.513 "bdev_ftl_unmap", 00:04:18.513 "bdev_ftl_unload", 00:04:18.513 "bdev_ftl_delete", 00:04:18.513 "bdev_ftl_load", 00:04:18.513 "bdev_ftl_create", 00:04:18.513 "bdev_virtio_attach_controller", 00:04:18.513 "bdev_virtio_scsi_get_devices", 00:04:18.513 "bdev_virtio_detach_controller", 00:04:18.513 "bdev_virtio_blk_set_hotplug", 00:04:18.513 "bdev_iscsi_delete", 00:04:18.513 "bdev_iscsi_create", 00:04:18.513 "bdev_iscsi_set_options", 00:04:18.513 "accel_error_inject_error", 00:04:18.513 "ioat_scan_accel_module", 00:04:18.513 "dsa_scan_accel_module", 00:04:18.513 "iaa_scan_accel_module", 00:04:18.513 "vfu_virtio_create_scsi_endpoint", 00:04:18.513 "vfu_virtio_scsi_remove_target", 00:04:18.513 "vfu_virtio_scsi_add_target", 00:04:18.513 "vfu_virtio_create_blk_endpoint", 00:04:18.513 "vfu_virtio_delete_endpoint", 00:04:18.513 "keyring_file_remove_key", 00:04:18.513 "keyring_file_add_key", 00:04:18.513 "keyring_linux_set_options", 00:04:18.513 "iscsi_get_histogram", 00:04:18.513 "iscsi_enable_histogram", 00:04:18.513 "iscsi_set_options", 00:04:18.513 "iscsi_get_auth_groups", 00:04:18.513 "iscsi_auth_group_remove_secret", 00:04:18.513 "iscsi_auth_group_add_secret", 
00:04:18.513 "iscsi_delete_auth_group", 00:04:18.513 "iscsi_create_auth_group", 00:04:18.513 "iscsi_set_discovery_auth", 00:04:18.513 "iscsi_get_options", 00:04:18.513 "iscsi_target_node_request_logout", 00:04:18.513 "iscsi_target_node_set_redirect", 00:04:18.513 "iscsi_target_node_set_auth", 00:04:18.513 "iscsi_target_node_add_lun", 00:04:18.513 "iscsi_get_stats", 00:04:18.513 "iscsi_get_connections", 00:04:18.513 "iscsi_portal_group_set_auth", 00:04:18.513 "iscsi_start_portal_group", 00:04:18.513 "iscsi_delete_portal_group", 00:04:18.513 "iscsi_create_portal_group", 00:04:18.513 "iscsi_get_portal_groups", 00:04:18.513 "iscsi_delete_target_node", 00:04:18.513 "iscsi_target_node_remove_pg_ig_maps", 00:04:18.513 "iscsi_target_node_add_pg_ig_maps", 00:04:18.513 "iscsi_create_target_node", 00:04:18.513 "iscsi_get_target_nodes", 00:04:18.513 "iscsi_delete_initiator_group", 00:04:18.513 "iscsi_initiator_group_remove_initiators", 00:04:18.513 "iscsi_initiator_group_add_initiators", 00:04:18.513 "iscsi_create_initiator_group", 00:04:18.513 "iscsi_get_initiator_groups", 00:04:18.513 "nvmf_set_crdt", 00:04:18.513 "nvmf_set_config", 00:04:18.513 "nvmf_set_max_subsystems", 00:04:18.513 "nvmf_stop_mdns_prr", 00:04:18.513 "nvmf_publish_mdns_prr", 00:04:18.513 "nvmf_subsystem_get_listeners", 00:04:18.513 "nvmf_subsystem_get_qpairs", 00:04:18.513 "nvmf_subsystem_get_controllers", 00:04:18.514 "nvmf_get_stats", 00:04:18.514 "nvmf_get_transports", 00:04:18.514 "nvmf_create_transport", 00:04:18.514 "nvmf_get_targets", 00:04:18.514 "nvmf_delete_target", 00:04:18.514 "nvmf_create_target", 00:04:18.514 "nvmf_subsystem_allow_any_host", 00:04:18.514 "nvmf_subsystem_remove_host", 00:04:18.514 "nvmf_subsystem_add_host", 00:04:18.514 "nvmf_ns_remove_host", 00:04:18.514 "nvmf_ns_add_host", 00:04:18.514 "nvmf_subsystem_remove_ns", 00:04:18.514 "nvmf_subsystem_add_ns", 00:04:18.514 "nvmf_subsystem_listener_set_ana_state", 00:04:18.514 "nvmf_discovery_get_referrals", 00:04:18.514 
"nvmf_discovery_remove_referral", 00:04:18.514 "nvmf_discovery_add_referral", 00:04:18.514 "nvmf_subsystem_remove_listener", 00:04:18.514 "nvmf_subsystem_add_listener", 00:04:18.514 "nvmf_delete_subsystem", 00:04:18.514 "nvmf_create_subsystem", 00:04:18.514 "nvmf_get_subsystems", 00:04:18.514 "env_dpdk_get_mem_stats", 00:04:18.514 "nbd_get_disks", 00:04:18.514 "nbd_stop_disk", 00:04:18.514 "nbd_start_disk", 00:04:18.514 "ublk_recover_disk", 00:04:18.514 "ublk_get_disks", 00:04:18.514 "ublk_stop_disk", 00:04:18.514 "ublk_start_disk", 00:04:18.514 "ublk_destroy_target", 00:04:18.514 "ublk_create_target", 00:04:18.514 "virtio_blk_create_transport", 00:04:18.514 "virtio_blk_get_transports", 00:04:18.514 "vhost_controller_set_coalescing", 00:04:18.514 "vhost_get_controllers", 00:04:18.514 "vhost_delete_controller", 00:04:18.514 "vhost_create_blk_controller", 00:04:18.514 "vhost_scsi_controller_remove_target", 00:04:18.514 "vhost_scsi_controller_add_target", 00:04:18.514 "vhost_start_scsi_controller", 00:04:18.514 "vhost_create_scsi_controller", 00:04:18.514 "thread_set_cpumask", 00:04:18.514 "framework_get_governor", 00:04:18.514 "framework_get_scheduler", 00:04:18.514 "framework_set_scheduler", 00:04:18.514 "framework_get_reactors", 00:04:18.514 "thread_get_io_channels", 00:04:18.514 "thread_get_pollers", 00:04:18.514 "thread_get_stats", 00:04:18.514 "framework_monitor_context_switch", 00:04:18.514 "spdk_kill_instance", 00:04:18.514 "log_enable_timestamps", 00:04:18.514 "log_get_flags", 00:04:18.514 "log_clear_flag", 00:04:18.514 "log_set_flag", 00:04:18.514 "log_get_level", 00:04:18.514 "log_set_level", 00:04:18.514 "log_get_print_level", 00:04:18.514 "log_set_print_level", 00:04:18.514 "framework_enable_cpumask_locks", 00:04:18.514 "framework_disable_cpumask_locks", 00:04:18.514 "framework_wait_init", 00:04:18.514 "framework_start_init", 00:04:18.514 "scsi_get_devices", 00:04:18.514 "bdev_get_histogram", 00:04:18.514 "bdev_enable_histogram", 00:04:18.514 
"bdev_set_qos_limit", 00:04:18.514 "bdev_set_qd_sampling_period", 00:04:18.514 "bdev_get_bdevs", 00:04:18.514 "bdev_reset_iostat", 00:04:18.514 "bdev_get_iostat", 00:04:18.514 "bdev_examine", 00:04:18.514 "bdev_wait_for_examine", 00:04:18.514 "bdev_set_options", 00:04:18.514 "notify_get_notifications", 00:04:18.514 "notify_get_types", 00:04:18.514 "accel_get_stats", 00:04:18.514 "accel_set_options", 00:04:18.514 "accel_set_driver", 00:04:18.514 "accel_crypto_key_destroy", 00:04:18.514 "accel_crypto_keys_get", 00:04:18.514 "accel_crypto_key_create", 00:04:18.514 "accel_assign_opc", 00:04:18.514 "accel_get_module_info", 00:04:18.514 "accel_get_opc_assignments", 00:04:18.514 "vmd_rescan", 00:04:18.514 "vmd_remove_device", 00:04:18.514 "vmd_enable", 00:04:18.514 "sock_get_default_impl", 00:04:18.514 "sock_set_default_impl", 00:04:18.514 "sock_impl_set_options", 00:04:18.514 "sock_impl_get_options", 00:04:18.514 "iobuf_get_stats", 00:04:18.514 "iobuf_set_options", 00:04:18.514 "keyring_get_keys", 00:04:18.514 "framework_get_pci_devices", 00:04:18.514 "framework_get_config", 00:04:18.514 "framework_get_subsystems", 00:04:18.514 "vfu_tgt_set_base_path", 00:04:18.514 "trace_get_info", 00:04:18.514 "trace_get_tpoint_group_mask", 00:04:18.514 "trace_disable_tpoint_group", 00:04:18.514 "trace_enable_tpoint_group", 00:04:18.514 "trace_clear_tpoint_mask", 00:04:18.514 "trace_set_tpoint_mask", 00:04:18.514 "spdk_get_version", 00:04:18.514 "rpc_get_methods" 00:04:18.514 ] 00:04:18.773 16:20:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.773 16:20:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:18.773 16:20:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1393778 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1393778 ']' 
00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1393778 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1393778 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1393778' 00:04:18.773 killing process with pid 1393778 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1393778 00:04:18.773 16:20:58 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1393778 00:04:19.031 00:04:19.031 real 0m1.285s 00:04:19.031 user 0m2.249s 00:04:19.031 sys 0m0.438s 00:04:19.031 16:20:58 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.031 16:20:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.031 ************************************ 00:04:19.031 END TEST spdkcli_tcp 00:04:19.031 ************************************ 00:04:19.289 16:20:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:19.289 16:20:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.289 16:20:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.289 16:20:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.289 16:20:58 -- common/autotest_common.sh@10 -- # set +x 00:04:19.289 ************************************ 00:04:19.289 START TEST dpdk_mem_utility 00:04:19.289 ************************************ 00:04:19.289 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.289 * Looking for test storage... 00:04:19.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:19.289 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.289 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1393982 00:04:19.289 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.289 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1393982 00:04:19.289 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1393982 ']' 00:04:19.289 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.289 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.289 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.289 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.289 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.289 [2024-07-15 16:20:58.768105] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:19.289 [2024-07-15 16:20:58.768214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393982 ] 00:04:19.289 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.289 [2024-07-15 16:20:58.830605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.548 [2024-07-15 16:20:58.948783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.116 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.116 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:20.116 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:20.116 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:20.116 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.116 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.116 { 00:04:20.116 "filename": "/tmp/spdk_mem_dump.txt" 00:04:20.116 } 00:04:20.116 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.116 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:20.375 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:20.375 1 heaps totaling size 814.000000 MiB 00:04:20.375 size: 814.000000 MiB heap id: 0 00:04:20.375 end heaps---------- 00:04:20.375 8 mempools totaling size 598.116089 MiB 00:04:20.375 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:20.375 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:20.375 size: 84.521057 MiB name: bdev_io_1393982 00:04:20.375 size: 51.011292 MiB name: evtpool_1393982 
00:04:20.375 size: 50.003479 MiB name: msgpool_1393982 00:04:20.375 size: 21.763794 MiB name: PDU_Pool 00:04:20.375 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:20.375 size: 0.026123 MiB name: Session_Pool 00:04:20.375 end mempools------- 00:04:20.375 6 memzones totaling size 4.142822 MiB 00:04:20.375 size: 1.000366 MiB name: RG_ring_0_1393982 00:04:20.375 size: 1.000366 MiB name: RG_ring_1_1393982 00:04:20.375 size: 1.000366 MiB name: RG_ring_4_1393982 00:04:20.375 size: 1.000366 MiB name: RG_ring_5_1393982 00:04:20.375 size: 0.125366 MiB name: RG_ring_2_1393982 00:04:20.375 size: 0.015991 MiB name: RG_ring_3_1393982 00:04:20.375 end memzones------- 00:04:20.375 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:20.375 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:20.375 list of free elements. size: 12.519348 MiB 00:04:20.375 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:20.375 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:20.375 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:20.375 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:20.375 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:20.375 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:20.375 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:20.375 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:20.375 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:20.375 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:20.375 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:20.375 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:20.375 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:20.375 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:04:20.375 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:20.375 list of standard malloc elements. size: 199.218079 MiB 00:04:20.375 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:20.375 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:20.375 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:20.375 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:20.375 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:20.375 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:20.375 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:20.375 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:20.375 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:20.375 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:20.375 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:20.375 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:04:20.375 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:20.375 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:20.375 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:20.375 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:20.375 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:20.375 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:20.375 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:20.375 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:20.375 list of memzone associated elements. 
size: 602.262573 MiB 00:04:20.375 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:20.375 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:20.375 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:20.375 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:20.375 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:20.375 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1393982_0 00:04:20.375 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:20.375 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1393982_0 00:04:20.375 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:20.375 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1393982_0 00:04:20.375 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:20.375 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:20.375 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:20.375 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:20.375 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:20.375 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1393982 00:04:20.375 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:20.375 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1393982 00:04:20.375 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:20.375 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1393982 00:04:20.375 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:20.375 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:20.375 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:20.375 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:20.375 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:20.375 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:20.375 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:20.375 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:20.375 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:20.375 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1393982 00:04:20.375 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:20.375 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1393982 00:04:20.375 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:20.375 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1393982 00:04:20.375 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:20.375 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1393982 00:04:20.375 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:20.375 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1393982 00:04:20.375 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:20.375 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:20.375 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:20.376 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:20.376 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:20.376 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:20.376 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:20.376 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1393982 00:04:20.376 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:20.376 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:20.376 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:20.376 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:20.376 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:04:20.376 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1393982 00:04:20.376 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:20.376 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:20.376 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:20.376 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1393982 00:04:20.376 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:20.376 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1393982 00:04:20.376 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:20.376 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:20.376 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:20.376 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1393982 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1393982 ']' 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1393982 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1393982 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1393982' 00:04:20.376 killing process with pid 1393982 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1393982 00:04:20.376 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1393982 00:04:20.941 00:04:20.941 real 0m1.624s 
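The killprocess trace above (autotest_common.sh@948 through @972) probes the target with `kill -0`, reads its command name via `ps --no-headers -o comm=`, refuses to kill a `sudo` wrapper, then kills and waits. A minimal standalone sketch of that pattern; this is an illustrative reimplementation, not the SPDK helper itself:

```shell
# Sketch of the killprocess pattern seen in the trace above:
# probe with `kill -0`, read the command name, refuse to kill
# anything running as sudo, then kill and reap. Illustrative only.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1               # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0  # already gone: nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1          # never kill the sudo wrapper
    kill "$pid"
    wait "$pid" 2>/dev/null                 # reap so the pid is really gone
    return 0
}
```

Calling it on an already-dead pid is a no-op success, which is why the trace can run it unconditionally from the EXIT trap.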
00:04:20.941 user 0m1.787s 00:04:20.941 sys 0m0.439s 00:04:20.941 16:21:00 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.941 16:21:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.941 ************************************ 00:04:20.941 END TEST dpdk_mem_utility 00:04:20.941 ************************************ 00:04:20.941 16:21:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.941 16:21:00 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.941 16:21:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.941 16:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.941 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.941 ************************************ 00:04:20.941 START TEST event 00:04:20.941 ************************************ 00:04:20.941 16:21:00 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.941 * Looking for test storage... 
00:04:20.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.941 16:21:00 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:20.941 16:21:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:20.941 16:21:00 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.941 16:21:00 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:20.941 16:21:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.941 16:21:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.941 ************************************ 00:04:20.941 START TEST event_perf 00:04:20.941 ************************************ 00:04:20.941 16:21:00 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.941 Running I/O for 1 seconds...[2024-07-15 16:21:00.434746] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:20.941 [2024-07-15 16:21:00.434811] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394218 ] 00:04:20.941 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.941 [2024-07-15 16:21:00.493970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:21.199 [2024-07-15 16:21:00.618849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.199 [2024-07-15 16:21:00.618907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.199 [2024-07-15 16:21:00.618942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:21.199 [2024-07-15 16:21:00.618946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.572 Running I/O for 1 seconds... 00:04:22.572 lcore 0: 234784 00:04:22.572 lcore 1: 234782 00:04:22.572 lcore 2: 234783 00:04:22.572 lcore 3: 234783 00:04:22.572 done. 
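In the event_perf run above, the `-m 0xF` coremask expands to lcores 0 through 3, matching the four `lcore N:` counters printed at the end of the run. A small sketch of decoding such a hex coremask into lcore IDs; the helper name is invented for illustration:

```shell
# Decode a DPDK-style hex coremask (e.g. the -m 0xF passed to
# event_perf above) into the list of lcore IDs it selects.
# Helper name is illustrative, not part of the test scripts.
coremask_to_lcores() {
    local mask=$(( $1 )) core=0 out=""
    while [ "$mask" -ne 0 ]; do
        if (( mask & 1 )); then out="$out $core"; fi   # bit set: lcore selected
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out# }"
}
```

`coremask_to_lcores 0xF` prints `0 1 2 3`; a mask of `0x3` selects lcores 0 and 1, which is how the later app_repeat test restricts itself to two cores.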
00:04:22.572 00:04:22.572 real 0m1.318s 00:04:22.572 user 0m4.230s 00:04:22.572 sys 0m0.081s 00:04:22.572 16:21:01 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.572 16:21:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.572 ************************************ 00:04:22.572 END TEST event_perf 00:04:22.572 ************************************ 00:04:22.572 16:21:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:22.572 16:21:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.572 16:21:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:22.572 16:21:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.572 16:21:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.572 ************************************ 00:04:22.572 START TEST event_reactor 00:04:22.572 ************************************ 00:04:22.572 16:21:01 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.572 [2024-07-15 16:21:01.804260] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:22.572 [2024-07-15 16:21:01.804325] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394450 ] 00:04:22.572 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.572 [2024-07-15 16:21:01.866190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.572 [2024-07-15 16:21:01.984826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.509 test_start 00:04:23.509 oneshot 00:04:23.509 tick 100 00:04:23.509 tick 100 00:04:23.509 tick 250 00:04:23.509 tick 100 00:04:23.509 tick 100 00:04:23.509 tick 100 00:04:23.509 tick 250 00:04:23.509 tick 500 00:04:23.509 tick 100 00:04:23.509 tick 100 00:04:23.509 tick 250 00:04:23.509 tick 100 00:04:23.509 tick 100 00:04:23.509 test_end 00:04:23.509 00:04:23.509 real 0m1.312s 00:04:23.509 user 0m1.232s 00:04:23.509 sys 0m0.074s 00:04:23.509 16:21:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.509 16:21:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:23.509 ************************************ 00:04:23.509 END TEST event_reactor 00:04:23.509 ************************************ 00:04:23.766 16:21:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:23.766 16:21:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.766 16:21:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:23.766 16:21:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.766 16:21:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.766 ************************************ 00:04:23.766 START TEST event_reactor_perf 00:04:23.766 ************************************ 00:04:23.766 16:21:03 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.766 [2024-07-15 16:21:03.170831] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:23.766 [2024-07-15 16:21:03.170906] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394723 ] 00:04:23.766 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.766 [2024-07-15 16:21:03.235937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.766 [2024-07-15 16:21:03.353195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.143 test_start 00:04:25.143 test_end 00:04:25.143 Performance: 356294 events per second 00:04:25.143 00:04:25.143 real 0m1.319s 00:04:25.143 user 0m1.234s 00:04:25.143 sys 0m0.078s 00:04:25.143 16:21:04 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.143 16:21:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:25.143 ************************************ 00:04:25.143 END TEST event_reactor_perf 00:04:25.143 ************************************ 00:04:25.143 16:21:04 event -- common/autotest_common.sh@1142 -- # return 0 00:04:25.143 16:21:04 event -- event/event.sh@49 -- # uname -s 00:04:25.143 16:21:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:25.143 16:21:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:25.143 16:21:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.143 16:21:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.143 16:21:04 event -- common/autotest_common.sh@10 -- # set +x 
00:04:25.143 ************************************ 00:04:25.143 START TEST event_scheduler 00:04:25.143 ************************************ 00:04:25.143 16:21:04 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:25.143 * Looking for test storage... 00:04:25.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:25.143 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:25.143 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1394916 00:04:25.143 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:25.143 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.143 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1394916 00:04:25.143 16:21:04 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1394916 ']' 00:04:25.143 16:21:04 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.143 16:21:04 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.143 16:21:04 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.143 16:21:04 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.143 16:21:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.143 [2024-07-15 16:21:04.612408] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:25.143 [2024-07-15 16:21:04.612490] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394916 ] 00:04:25.143 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.143 [2024-07-15 16:21:04.673721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.401 [2024-07-15 16:21:04.789589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.401 [2024-07-15 16:21:04.789675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:25.401 [2024-07-15 16:21:04.789652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.401 [2024-07-15 16:21:04.789679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:25.401 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.401 [2024-07-15 16:21:04.834497] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:25.401 [2024-07-15 16:21:04.834523] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:25.401 [2024-07-15 16:21:04.834555] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:25.401 [2024-07-15 16:21:04.834566] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:25.401 [2024-07-15 16:21:04.834576] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.401 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.401 [2024-07-15 16:21:04.930340] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.401 16:21:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.401 16:21:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.401 ************************************ 00:04:25.401 START TEST scheduler_create_thread 00:04:25.401 ************************************ 00:04:25.401 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.402 2 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.402 3 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.402 4 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.402 5 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.402 16:21:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 6 
00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 7 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 8 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 9 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:25.659 16:21:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 10 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.659 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.227 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.228 00:04:26.228 real 0m0.591s 00:04:26.228 user 0m0.006s 00:04:26.228 sys 0m0.007s 00:04:26.228 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.228 16:21:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.228 ************************************ 00:04:26.228 END TEST scheduler_create_thread 00:04:26.228 ************************************ 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:26.228 16:21:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:26.228 16:21:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1394916 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1394916 ']' 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1394916 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1394916 00:04:26.228 16:21:05 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1394916' 00:04:26.228 killing process with pid 1394916 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1394916 00:04:26.228 16:21:05 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1394916 00:04:26.488 [2024-07-15 16:21:06.030479] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:26.746 00:04:26.746 real 0m1.768s 00:04:26.746 user 0m2.240s 00:04:26.746 sys 0m0.312s 00:04:26.746 16:21:06 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.746 16:21:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:26.746 ************************************ 00:04:26.746 END TEST event_scheduler 00:04:26.746 ************************************ 00:04:26.746 16:21:06 event -- common/autotest_common.sh@1142 -- # return 0 00:04:26.746 16:21:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:26.746 16:21:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:26.746 16:21:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.746 16:21:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.746 16:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.006 ************************************ 00:04:27.006 START TEST app_repeat 00:04:27.006 ************************************ 00:04:27.006 16:21:06 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.006 16:21:06 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1395134 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1395134' 00:04:27.006 Process app_repeat pid: 1395134 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:27.006 spdk_app_start Round 0 00:04:27.006 16:21:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1395134 /var/tmp/spdk-nbd.sock 00:04:27.006 16:21:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1395134 ']' 00:04:27.006 16:21:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.006 16:21:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.006 16:21:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:27.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
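The `waitforlisten` call traced above polls until the launched app has created its UNIX domain RPC socket (here `/var/tmp/spdk-nbd.sock`), with `max_retries=100`. A hedged sketch of that polling loop; the function body is an illustration, not the SPDK helper verbatim, and the retry count is exposed as a parameter for testing:

```shell
# Poll until $pid has created the RPC socket at $rpc_addr, as in the
# waitforlisten trace above. Illustrative reimplementation only; a
# socket file existing does not strictly prove the app is accepting yet.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
        [ -S "$rpc_addr" ] && return 0          # socket file is there
        sleep 0.1
    done
    return 1                                    # gave up after max_retries polls
}
```

Bailing out early when the target pid dies is what lets the surrounding trap fire promptly instead of burning the full retry budget.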
00:04:27.006 16:21:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.006 16:21:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.006 [2024-07-15 16:21:06.369646] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:27.006 [2024-07-15 16:21:06.369717] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395134 ] 00:04:27.006 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.006 [2024-07-15 16:21:06.428224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.006 [2024-07-15 16:21:06.537338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.006 [2024-07-15 16:21:06.537342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.265 16:21:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.265 16:21:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:27.265 16:21:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.522 Malloc0 00:04:27.522 16:21:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.779 Malloc1 00:04:27.779 16:21:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.779 16:21:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.779 16:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.779 16:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:04:27.779 16:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.780 16:21:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:28.037 /dev/nbd0 00:04:28.037 16:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:28.037 16:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.037 1+0 records in 00:04:28.037 1+0 records out 00:04:28.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169342 s, 24.2 MB/s 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:28.037 16:21:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:28.037 16:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.038 16:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.038 16:21:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:28.333 /dev/nbd1 00:04:28.333 16:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:28.333 16:21:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.333 1+0 records in 00:04:28.333 1+0 records out 00:04:28.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184087 s, 22.3 MB/s 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:28.333 16:21:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:28.333 16:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.333 16:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.333 16:21:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.333 16:21:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.333 16:21:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:28.592 { 00:04:28.592 "nbd_device": "/dev/nbd0", 00:04:28.592 
"bdev_name": "Malloc0" 00:04:28.592 }, 00:04:28.592 { 00:04:28.592 "nbd_device": "/dev/nbd1", 00:04:28.592 "bdev_name": "Malloc1" 00:04:28.592 } 00:04:28.592 ]' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:28.592 { 00:04:28.592 "nbd_device": "/dev/nbd0", 00:04:28.592 "bdev_name": "Malloc0" 00:04:28.592 }, 00:04:28.592 { 00:04:28.592 "nbd_device": "/dev/nbd1", 00:04:28.592 "bdev_name": "Malloc1" 00:04:28.592 } 00:04:28.592 ]' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:28.592 /dev/nbd1' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:28.592 /dev/nbd1' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:04:28.592 256+0 records in 00:04:28.592 256+0 records out 00:04:28.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508509 s, 206 MB/s 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:28.592 256+0 records in 00:04:28.592 256+0 records out 00:04:28.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215613 s, 48.6 MB/s 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:28.592 256+0 records in 00:04:28.592 256+0 records out 00:04:28.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253256 s, 41.4 MB/s 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.592 16:21:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.158 
16:21:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.158 16:21:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.415 16:21:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.415 16:21:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.415 16:21:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.415 16:21:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.415 16:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:29.673 
16:21:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:29.673 16:21:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:29.673 16:21:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:29.932 16:21:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:30.191 [2024-07-15 16:21:09.608056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.191 [2024-07-15 16:21:09.722443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.191 [2024-07-15 16:21:09.722444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.191 [2024-07-15 16:21:09.780174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:30.191 [2024-07-15 16:21:09.780257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:33.482 16:21:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.482 16:21:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:33.482 spdk_app_start Round 1 00:04:33.482 16:21:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1395134 /var/tmp/spdk-nbd.sock 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1395134 ']' 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:33.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.482 16:21:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:33.482 16:21:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.482 Malloc0 00:04:33.482 16:21:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.482 Malloc1 00:04:33.747 16:21:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.747 16:21:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.747 /dev/nbd0 00:04:34.008 16:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.008 16:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.008 1+0 records in 00:04:34.008 1+0 records out 00:04:34.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165611 s, 24.7 MB/s 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.008 16:21:13 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:34.008 16:21:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:34.008 16:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.008 16:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.008 16:21:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.265 /dev/nbd1 00:04:34.265 16:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.265 16:21:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.265 1+0 records in 00:04:34.265 1+0 records out 00:04:34.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205166 s, 
20.0 MB/s 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:34.265 16:21:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:34.265 16:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.265 16:21:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.265 16:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.265 16:21:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.266 16:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.524 { 00:04:34.524 "nbd_device": "/dev/nbd0", 00:04:34.524 "bdev_name": "Malloc0" 00:04:34.524 }, 00:04:34.524 { 00:04:34.524 "nbd_device": "/dev/nbd1", 00:04:34.524 "bdev_name": "Malloc1" 00:04:34.524 } 00:04:34.524 ]' 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.524 { 00:04:34.524 "nbd_device": "/dev/nbd0", 00:04:34.524 "bdev_name": "Malloc0" 00:04:34.524 }, 00:04:34.524 { 00:04:34.524 "nbd_device": "/dev/nbd1", 00:04:34.524 "bdev_name": "Malloc1" 00:04:34.524 } 00:04:34.524 ]' 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.524 /dev/nbd1' 00:04:34.524 16:21:13 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.524 /dev/nbd1' 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.524 256+0 records in 00:04:34.524 256+0 records out 00:04:34.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498072 s, 211 MB/s 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.524 256+0 records in 00:04:34.524 256+0 records out 00:04:34.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243265 s, 43.1 MB/s 00:04:34.524 16:21:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.524 16:21:13 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.524 256+0 records in 00:04:34.524 256+0 records out 00:04:34.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225194 s, 46.6 MB/s 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.524 16:21:14 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.524 16:21:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.784 16:21:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.042 16:21:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.300 16:21:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.300 16:21:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.560 16:21:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.819 [2024-07-15 16:21:15.386912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.076 [2024-07-15 16:21:15.502140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
00:04:36.076 [2024-07-15 16:21:15.502146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.076 [2024-07-15 16:21:15.564387] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.076 [2024-07-15 16:21:15.564464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.608 16:21:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.608 16:21:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:38.608 spdk_app_start Round 2 00:04:38.608 16:21:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1395134 /var/tmp/spdk-nbd.sock 00:04:38.608 16:21:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1395134 ']' 00:04:38.608 16:21:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.608 16:21:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.608 16:21:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:38.608 16:21:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.608 16:21:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 16:21:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.865 16:21:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:38.865 16:21:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.123 Malloc0 00:04:39.123 16:21:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.380 Malloc1 00:04:39.380 16:21:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.380 16:21:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.637 /dev/nbd0 00:04:39.637 16:21:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.637 16:21:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.637 1+0 records in 00:04:39.637 1+0 records out 00:04:39.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000137771 s, 29.7 MB/s 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:39.637 16:21:19 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.637 16:21:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:39.637 16:21:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.637 16:21:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.637 16:21:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.895 /dev/nbd1 00:04:39.895 16:21:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.895 16:21:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.895 16:21:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.896 1+0 records in 00:04:39.896 1+0 records out 00:04:39.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195588 s, 20.9 MB/s 00:04:39.896 16:21:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.896 16:21:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:39.896 16:21:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.896 16:21:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.896 16:21:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:39.896 16:21:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.896 16:21:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.896 16:21:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.896 16:21:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.896 16:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.154 16:21:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.154 { 00:04:40.154 "nbd_device": "/dev/nbd0", 00:04:40.154 "bdev_name": "Malloc0" 00:04:40.154 }, 00:04:40.154 { 00:04:40.154 "nbd_device": "/dev/nbd1", 00:04:40.154 "bdev_name": "Malloc1" 00:04:40.154 } 00:04:40.154 ]' 00:04:40.154 16:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.154 { 00:04:40.154 "nbd_device": "/dev/nbd0", 00:04:40.154 "bdev_name": "Malloc0" 00:04:40.154 }, 00:04:40.154 { 00:04:40.154 "nbd_device": "/dev/nbd1", 00:04:40.154 "bdev_name": "Malloc1" 00:04:40.154 } 00:04:40.154 ]' 00:04:40.154 16:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.154 16:21:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.154 /dev/nbd1' 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.413 /dev/nbd1' 00:04:40.413 
16:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.413 256+0 records in 00:04:40.413 256+0 records out 00:04:40.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493209 s, 213 MB/s 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.413 256+0 records in 00:04:40.413 256+0 records out 00:04:40.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209912 s, 50.0 MB/s 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.413 256+0 records in 00:04:40.413 256+0 records out 00:04:40.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250873 s, 41.8 MB/s 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.413 16:21:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.672 16:21:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.930 16:21:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.930 16:21:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.188 16:21:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.188 16:21:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.445 16:21:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.703 [2024-07-15 16:21:21.208815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.962 [2024-07-15 16:21:21.321912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.962 [2024-07-15 16:21:21.321912] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.962 [2024-07-15 16:21:21.382796] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.962 [2024-07-15 16:21:21.382872] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.522 16:21:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1395134 /var/tmp/spdk-nbd.sock 00:04:44.522 16:21:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1395134 ']' 00:04:44.522 16:21:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.522 16:21:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.522 16:21:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:44.522 16:21:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.522 16:21:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:44.780 16:21:24 event.app_repeat -- event/event.sh@39 -- # killprocess 1395134 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1395134 ']' 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1395134 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1395134 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1395134' 00:04:44.780 killing process with pid 1395134 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1395134 00:04:44.780 16:21:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1395134 00:04:45.049 spdk_app_start is called in Round 0. 00:04:45.049 Shutdown signal received, stop current app iteration 00:04:45.049 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:04:45.049 spdk_app_start is called in Round 1. 00:04:45.049 Shutdown signal received, stop current app iteration 00:04:45.049 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:04:45.049 spdk_app_start is called in Round 2. 
00:04:45.049 Shutdown signal received, stop current app iteration 00:04:45.049 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:04:45.049 spdk_app_start is called in Round 3. 00:04:45.049 Shutdown signal received, stop current app iteration 00:04:45.049 16:21:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:45.049 16:21:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:45.049 00:04:45.049 real 0m18.115s 00:04:45.049 user 0m39.221s 00:04:45.049 sys 0m3.300s 00:04:45.049 16:21:24 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.049 16:21:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.049 ************************************ 00:04:45.049 END TEST app_repeat 00:04:45.049 ************************************ 00:04:45.049 16:21:24 event -- common/autotest_common.sh@1142 -- # return 0 00:04:45.049 16:21:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:45.049 16:21:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:45.049 16:21:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.049 16:21:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.049 16:21:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.049 ************************************ 00:04:45.049 START TEST cpu_locks 00:04:45.049 ************************************ 00:04:45.049 16:21:24 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:45.049 * Looking for test storage... 
00:04:45.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:45.049 16:21:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:45.049 16:21:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:45.049 16:21:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:45.049 16:21:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:45.049 16:21:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.049 16:21:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.049 16:21:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.049 ************************************ 00:04:45.049 START TEST default_locks 00:04:45.049 ************************************ 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1398088 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1398088 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1398088 ']' 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:45.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.049 16:21:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.049 [2024-07-15 16:21:24.629578] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:45.049 [2024-07-15 16:21:24.629674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398088 ] 00:04:45.309 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.310 [2024-07-15 16:21:24.687204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.310 [2024-07-15 16:21:24.792826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.569 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.569 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:45.569 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1398088 00:04:45.569 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1398088 00:04:45.569 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.826 lslocks: write error 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1398088 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1398088 ']' 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1398088 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:45.826 16:21:25 event.cpu_locks.default_locks 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398088 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398088' 00:04:45.826 killing process with pid 1398088 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1398088 00:04:45.826 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1398088 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1398088 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1398088 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1398088 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1398088 ']' 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1398088) - No such process 00:04:46.396 ERROR: process (pid: 1398088) is no longer running 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:46.396 16:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:46.397 00:04:46.397 real 0m1.246s 00:04:46.397 user 0m1.195s 00:04:46.397 sys 0m0.487s 00:04:46.397 16:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.397 16:21:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:04:46.397 ************************************ 00:04:46.397 END TEST default_locks 00:04:46.397 ************************************ 00:04:46.397 16:21:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:46.397 16:21:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:46.397 16:21:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.397 16:21:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.397 16:21:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.397 ************************************ 00:04:46.397 START TEST default_locks_via_rpc 00:04:46.397 ************************************ 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1398251 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1398251 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1398251 ']' 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.397 16:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.397 [2024-07-15 16:21:25.921268] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:46.397 [2024-07-15 16:21:25.921373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398251 ] 00:04:46.397 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.397 [2024-07-15 16:21:25.978979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.657 [2024-07-15 16:21:26.089380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1398251 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1398251 00:04:46.916 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1398251 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1398251 ']' 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1398251 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398251 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398251' 00:04:47.174 killing process with pid 1398251 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 1398251 00:04:47.174 16:21:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1398251 00:04:47.740 00:04:47.740 real 0m1.243s 00:04:47.740 user 0m1.196s 00:04:47.740 sys 0m0.507s 00:04:47.740 16:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.740 16:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.740 ************************************ 00:04:47.740 END TEST default_locks_via_rpc 00:04:47.740 ************************************ 00:04:47.740 16:21:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:47.740 16:21:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:47.740 16:21:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.740 16:21:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.740 16:21:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.740 ************************************ 00:04:47.740 START TEST non_locking_app_on_locked_coremask 00:04:47.740 ************************************ 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1398413 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1398413 /var/tmp/spdk.sock 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1398413 ']' 
00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.740 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.740 [2024-07-15 16:21:27.206239] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:47.740 [2024-07-15 16:21:27.206340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398413 ] 00:04:47.740 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.740 [2024-07-15 16:21:27.265264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.999 [2024-07-15 16:21:27.375870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1398424 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1398424 /var/tmp/spdk2.sock 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1398424 ']' 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.256 16:21:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.256 [2024-07-15 16:21:27.687133] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:48.256 [2024-07-15 16:21:27.687237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398424 ] 00:04:48.256 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.256 [2024-07-15 16:21:27.777262] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.256 [2024-07-15 16:21:27.777306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.513 [2024-07-15 16:21:28.022367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.077 16:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.077 16:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:49.077 16:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1398413 00:04:49.077 16:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1398413 00:04:49.077 16:21:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.641 lslocks: write error 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1398413 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1398413 ']' 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1398413 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398413 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1398413' 00:04:49.641 killing process with pid 1398413 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1398413 00:04:49.641 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1398413 00:04:50.572 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1398424 00:04:50.572 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1398424 ']' 00:04:50.572 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1398424 00:04:50.572 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:50.572 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.573 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398424 00:04:50.573 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.573 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.573 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398424' 00:04:50.573 killing process with pid 1398424 00:04:50.573 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1398424 00:04:50.573 16:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1398424 00:04:50.832 00:04:50.832 real 0m3.249s 00:04:50.832 user 0m3.397s 00:04:50.832 sys 0m1.035s 00:04:50.832 16:21:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.832 16:21:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.832 ************************************ 00:04:50.832 END TEST non_locking_app_on_locked_coremask 00:04:50.832 ************************************ 00:04:50.832 16:21:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:51.091 16:21:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:51.091 16:21:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.091 16:21:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.091 16:21:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.091 ************************************ 00:04:51.091 START TEST locking_app_on_unlocked_coremask 00:04:51.091 ************************************ 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1398847 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1398847 /var/tmp/spdk.sock 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1398847 ']' 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.091 16:21:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.091 [2024-07-15 16:21:30.510709] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:51.091 [2024-07-15 16:21:30.510786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398847 ] 00:04:51.091 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.091 [2024-07-15 16:21:30.571944] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:51.091 [2024-07-15 16:21:30.571981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.091 [2024-07-15 16:21:30.687537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1398983 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1398983 /var/tmp/spdk2.sock 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1398983 ']' 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.025 16:21:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.025 [2024-07-15 16:21:31.497327] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:52.025 [2024-07-15 16:21:31.497410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398983 ] 00:04:52.025 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.025 [2024-07-15 16:21:31.592850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.283 [2024-07-15 16:21:31.831120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.216 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.216 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:53.216 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1398983 00:04:53.216 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1398983 00:04:53.216 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.474 lslocks: write error 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1398847 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1398847 ']' 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1398847 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398847 00:04:53.474 16:21:32 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398847' 00:04:53.474 killing process with pid 1398847 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1398847 00:04:53.474 16:21:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1398847 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1398983 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1398983 ']' 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1398983 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398983 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398983' 00:04:54.407 killing process with pid 1398983 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 1398983 00:04:54.407 16:21:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1398983 00:04:54.975 00:04:54.975 real 0m3.860s 00:04:54.975 user 0m4.203s 00:04:54.975 sys 0m1.091s 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.975 ************************************ 00:04:54.975 END TEST locking_app_on_unlocked_coremask 00:04:54.975 ************************************ 00:04:54.975 16:21:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:54.975 16:21:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:54.975 16:21:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.975 16:21:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.975 16:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.975 ************************************ 00:04:54.975 START TEST locking_app_on_locked_coremask 00:04:54.975 ************************************ 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1399289 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1399289 /var/tmp/spdk.sock 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
1399289 ']' 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.975 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.975 [2024-07-15 16:21:34.415406] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:54.975 [2024-07-15 16:21:34.415509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399289 ] 00:04:54.976 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.976 [2024-07-15 16:21:34.485590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.234 [2024-07-15 16:21:34.605564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.492 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1399423 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1399423 /var/tmp/spdk2.sock 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1399423 /var/tmp/spdk2.sock 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1399423 /var/tmp/spdk2.sock 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1399423 ']' 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.493 16:21:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.493 [2024-07-15 16:21:34.905569] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:55.493 [2024-07-15 16:21:34.905660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399423 ] 00:04:55.493 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.493 [2024-07-15 16:21:34.997980] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1399289 has claimed it. 00:04:55.493 [2024-07-15 16:21:34.998030] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:56.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1399423) - No such process 00:04:56.058 ERROR: process (pid: 1399423) is no longer running 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 1399289 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1399289 00:04:56.058 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.316 lslocks: write error 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1399289 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1399289 ']' 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1399289 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1399289 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1399289' 00:04:56.316 killing process with pid 1399289 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1399289 00:04:56.316 16:21:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1399289 00:04:56.880 00:04:56.880 real 0m2.001s 00:04:56.880 user 0m2.188s 00:04:56.880 sys 0m0.621s 00:04:56.880 16:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.880 
16:21:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.880 ************************************ 00:04:56.880 END TEST locking_app_on_locked_coremask 00:04:56.880 ************************************ 00:04:56.880 16:21:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:56.880 16:21:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:56.880 16:21:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.880 16:21:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.880 16:21:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.880 ************************************ 00:04:56.880 START TEST locking_overlapped_coremask 00:04:56.880 ************************************ 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1399587 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1399587 /var/tmp/spdk.sock 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1399587 ']' 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.880 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.880 [2024-07-15 16:21:36.464712] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:56.880 [2024-07-15 16:21:36.464818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399587 ] 00:04:57.138 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.138 [2024-07-15 16:21:36.529588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.138 [2024-07-15 16:21:36.653902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.138 [2024-07-15 16:21:36.653948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.138 [2024-07-15 16:21:36.653952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1399715 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1399715 /var/tmp/spdk2.sock 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:57.396 16:21:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1399715 /var/tmp/spdk2.sock 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1399715 /var/tmp/spdk2.sock 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1399715 ']' 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.396 16:21:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.396 [2024-07-15 16:21:36.955467] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:04:57.396 [2024-07-15 16:21:36.955553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399715 ] 00:04:57.396 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.653 [2024-07-15 16:21:37.046360] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1399587 has claimed it. 00:04:57.653 [2024-07-15 16:21:37.046421] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:58.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1399715) - No such process 00:04:58.248 ERROR: process (pid: 1399715) is no longer running 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.248 16:21:37 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1399587 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1399587 ']' 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1399587 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1399587 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1399587' 00:04:58.248 killing process with pid 1399587 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1399587 00:04:58.248 16:21:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1399587 00:04:58.814 00:04:58.814 real 0m1.704s 00:04:58.814 user 0m4.510s 00:04:58.814 sys 0m0.439s 00:04:58.814 16:21:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.814 16:21:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.814 ************************************ 00:04:58.814 END TEST locking_overlapped_coremask 00:04:58.814 ************************************ 00:04:58.814 16:21:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:58.814 16:21:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.814 16:21:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.814 16:21:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.814 16:21:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.814 ************************************ 00:04:58.814 START TEST locking_overlapped_coremask_via_rpc 00:04:58.814 ************************************ 00:04:58.814 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:58.814 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1399888 00:04:58.814 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.814 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1399888 /var/tmp/spdk.sock 00:04:58.814 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1399888 ']' 00:04:58.814 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.815 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.815 16:21:38 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.815 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.815 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.815 [2024-07-15 16:21:38.217809] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:58.815 [2024-07-15 16:21:38.217913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399888 ] 00:04:58.815 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.815 [2024-07-15 16:21:38.281536] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:58.815 [2024-07-15 16:21:38.281579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.815 [2024-07-15 16:21:38.403852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.815 [2024-07-15 16:21:38.403902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.815 [2024-07-15 16:21:38.403908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1399894 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1399894 /var/tmp/spdk2.sock 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1399894 ']' 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.073 16:21:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.331 [2024-07-15 16:21:38.718401] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:59.331 [2024-07-15 16:21:38.718496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399894 ] 00:04:59.331 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.331 [2024-07-15 16:21:38.801649] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:59.331 [2024-07-15 16:21:38.801682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.589 [2024-07-15 16:21:39.025105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.589 [2024-07-15 16:21:39.028935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:59.589 [2024-07-15 16:21:39.028940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.155 [2024-07-15 16:21:39.656987] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1399888 has claimed it. 
00:05:00.155 request: 00:05:00.155 { 00:05:00.155 "method": "framework_enable_cpumask_locks", 00:05:00.155 "req_id": 1 00:05:00.155 } 00:05:00.155 Got JSON-RPC error response 00:05:00.155 response: 00:05:00.155 { 00:05:00.155 "code": -32603, 00:05:00.155 "message": "Failed to claim CPU core: 2" 00:05:00.155 } 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1399888 /var/tmp/spdk.sock 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1399888 ']' 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.155 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1399894 /var/tmp/spdk2.sock 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1399894 ']' 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.413 16:21:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:00.671 00:05:00.671 real 0m2.009s 00:05:00.671 user 0m1.037s 00:05:00.671 sys 0m0.187s 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.671 16:21:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.671 ************************************ 00:05:00.671 END TEST locking_overlapped_coremask_via_rpc 00:05:00.671 ************************************ 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:00.671 16:21:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:00.671 16:21:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
1399888 ]] 00:05:00.671 16:21:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1399888 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1399888 ']' 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1399888 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1399888 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1399888' 00:05:00.671 killing process with pid 1399888 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1399888 00:05:00.671 16:21:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1399888 00:05:01.237 16:21:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1399894 ]] 00:05:01.237 16:21:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1399894 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1399894 ']' 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1399894 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1399894 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:01.237 16:21:40 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1399894' 00:05:01.237 killing process with pid 1399894 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1399894 00:05:01.237 16:21:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1399894 00:05:01.804 16:21:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:01.804 16:21:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:01.804 16:21:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1399888 ]] 00:05:01.804 16:21:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1399888 00:05:01.804 16:21:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1399888 ']' 00:05:01.804 16:21:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1399888 00:05:01.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1399888) - No such process 00:05:01.804 16:21:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1399888 is not found' 00:05:01.804 Process with pid 1399888 is not found 00:05:01.805 16:21:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1399894 ]] 00:05:01.805 16:21:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1399894 00:05:01.805 16:21:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1399894 ']' 00:05:01.805 16:21:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1399894 00:05:01.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1399894) - No such process 00:05:01.805 16:21:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1399894 is not found' 00:05:01.805 Process with pid 1399894 is not found 00:05:01.805 16:21:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:01.805 00:05:01.805 real 0m16.646s 00:05:01.805 user 0m28.642s 00:05:01.805 sys 0m5.255s 00:05:01.805 16:21:41 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.805 16:21:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.805 ************************************ 00:05:01.805 END TEST cpu_locks 00:05:01.805 ************************************ 00:05:01.805 16:21:41 event -- common/autotest_common.sh@1142 -- # return 0 00:05:01.805 00:05:01.805 real 0m40.838s 00:05:01.805 user 1m16.952s 00:05:01.805 sys 0m9.334s 00:05:01.805 16:21:41 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.805 16:21:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.805 ************************************ 00:05:01.805 END TEST event 00:05:01.805 ************************************ 00:05:01.805 16:21:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.805 16:21:41 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:01.805 16:21:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.805 16:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.805 16:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:01.805 ************************************ 00:05:01.805 START TEST thread 00:05:01.805 ************************************ 00:05:01.805 16:21:41 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:01.805 * Looking for test storage... 
00:05:01.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:01.805 16:21:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.805 16:21:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:01.805 16:21:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.805 16:21:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.805 ************************************ 00:05:01.805 START TEST thread_poller_perf 00:05:01.805 ************************************ 00:05:01.805 16:21:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.805 [2024-07-15 16:21:41.314730] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:01.805 [2024-07-15 16:21:41.314798] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400302 ] 00:05:01.805 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.805 [2024-07-15 16:21:41.374379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.063 [2024-07-15 16:21:41.483547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.063 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:03.463 ====================================== 00:05:03.463 busy:2709307192 (cyc) 00:05:03.463 total_run_count: 294000 00:05:03.463 tsc_hz: 2700000000 (cyc) 00:05:03.463 ====================================== 00:05:03.463 poller_cost: 9215 (cyc), 3412 (nsec) 00:05:03.463 00:05:03.463 real 0m1.313s 00:05:03.463 user 0m1.229s 00:05:03.463 sys 0m0.079s 00:05:03.463 16:21:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.463 16:21:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.463 ************************************ 00:05:03.463 END TEST thread_poller_perf 00:05:03.463 ************************************ 00:05:03.463 16:21:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:03.463 16:21:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.463 16:21:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:03.463 16:21:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.463 16:21:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.463 ************************************ 00:05:03.463 START TEST thread_poller_perf 00:05:03.463 ************************************ 00:05:03.463 16:21:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.463 [2024-07-15 16:21:42.673603] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:03.463 [2024-07-15 16:21:42.673670] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400538 ] 00:05:03.463 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.463 [2024-07-15 16:21:42.735363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.463 [2024-07-15 16:21:42.852392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.463 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:04.396 ====================================== 00:05:04.396 busy:2702470750 (cyc) 00:05:04.396 total_run_count: 3745000 00:05:04.396 tsc_hz: 2700000000 (cyc) 00:05:04.396 ====================================== 00:05:04.396 poller_cost: 721 (cyc), 267 (nsec) 00:05:04.396 00:05:04.396 real 0m1.312s 00:05:04.396 user 0m1.218s 00:05:04.396 sys 0m0.084s 00:05:04.396 16:21:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.396 16:21:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.396 ************************************ 00:05:04.396 END TEST thread_poller_perf 00:05:04.396 ************************************ 00:05:04.655 16:21:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:04.655 16:21:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:04.655 00:05:04.655 real 0m2.773s 00:05:04.655 user 0m2.511s 00:05:04.655 sys 0m0.258s 00:05:04.655 16:21:43 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.655 16:21:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.655 ************************************ 00:05:04.655 END TEST thread 00:05:04.655 ************************************ 00:05:04.655 16:21:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.655 16:21:44 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:04.655 16:21:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.655 16:21:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.655 16:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.655 ************************************ 00:05:04.655 START TEST accel 00:05:04.655 ************************************ 00:05:04.655 16:21:44 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:04.655 * Looking for test storage... 00:05:04.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:04.655 16:21:44 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:04.655 16:21:44 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:04.655 16:21:44 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:04.655 16:21:44 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1400738 00:05:04.655 16:21:44 accel -- accel/accel.sh@63 -- # waitforlisten 1400738 00:05:04.655 16:21:44 accel -- common/autotest_common.sh@829 -- # '[' -z 1400738 ']' 00:05:04.655 16:21:44 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:04.655 16:21:44 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.655 16:21:44 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:04.655 16:21:44 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.655 16:21:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.655 16:21:44 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:04.655 16:21:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.655 16:21:44 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.655 16:21:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.655 16:21:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.655 16:21:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.655 16:21:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.655 16:21:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:04.655 16:21:44 accel -- accel/accel.sh@41 -- # jq -r . 00:05:04.655 [2024-07-15 16:21:44.159407] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:04.655 [2024-07-15 16:21:44.159506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400738 ] 00:05:04.655 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.655 [2024-07-15 16:21:44.225531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.913 [2024-07-15 16:21:44.341835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@862 -- # return 0 00:05:05.849 16:21:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:05.849 16:21:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:05.849 16:21:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:05.849 16:21:45 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:05.849 16:21:45 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:05.849 16:21:45 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:05.849 16:21:45 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # 
read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:05.849 16:21:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:05.849 16:21:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:05.849 16:21:45 accel -- accel/accel.sh@75 -- # killprocess 1400738 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@948 -- # '[' -z 1400738 ']' 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@952 -- # kill -0 1400738 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@953 -- # uname 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1400738 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1400738' 00:05:05.849 killing process with pid 1400738 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@967 -- # kill 1400738 00:05:05.849 16:21:45 accel -- common/autotest_common.sh@972 -- # wait 1400738 00:05:06.107 16:21:45 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:06.107 16:21:45 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:06.107 
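The repeated `IFS== / read -r opc module` lines above come from a loop that splits each `opc=module` assignment on `=` and records the opcode's module in the `expected_opcs` map. A standalone sketch of that parsing pattern (the sample pairs here are made up; the real list is produced by the `accel_get_opc_assignments` RPC piped through jq):

```shell
#!/bin/sh
# Split "opc=module" pairs on '=' the way the accel.sh loop does.
# Sample data only -- the real input comes from accel_get_opc_assignments.
pairs="copy=software
crc32c=software"

echo "$pairs" | while IFS='=' read -r opc module; do
    printf '%s -> %s\n' "$opc" "$module"
done
```

Setting `IFS` only for the `read` invocation keeps the split local, so the rest of the loop body sees normal word splitting.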
16:21:45 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:06.107 16:21:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.107 16:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.107 16:21:45 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:06.107 16:21:45 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:06.108 16:21:45 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.108 16:21:45 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:06.108 16:21:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.108 16:21:45 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:06.108 16:21:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:06.366 16:21:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.366 16:21:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.366 ************************************ 00:05:06.366 START TEST accel_missing_filename 00:05:06.366 ************************************ 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.366 16:21:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:06.366 16:21:45 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:06.366 16:21:45 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:06.366 [2024-07-15 16:21:45.747783] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:06.366 [2024-07-15 16:21:45.747848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400910 ] 00:05:06.366 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.366 [2024-07-15 16:21:45.814256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.366 [2024-07-15 16:21:45.931326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.624 [2024-07-15 16:21:45.993002] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.624 [2024-07-15 16:21:46.075070] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:06.624 A filename is required. 
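The `es=234` → `es=106` → `es=1` sequence that follows shows the harness's NOT wrapper normalizing the wrapped command's exit status: values above 128 (signal deaths) have 128 subtracted, and any remaining nonzero status is collapsed to 1, so the wrapper succeeds exactly when the command failed. A simplified guess at that pattern (the real logic lives in `autotest_common.sh`; this sketch only illustrates the inversion):

```shell
#!/bin/sh
# NOT-style helper: succeed only when the wrapped command fails.
# Simplified sketch, not the actual autotest_common.sh implementation.
NOT() {
    "$@"
    es=$?
    [ "$es" -gt 128 ] && es=$((es - 128))   # strip the signal offset
    [ "$es" -ne 0 ]                         # invert: failure becomes success
}

NOT false && echo "wrapped command failed as expected"
```

Used as `NOT accel_perf -t 1 -w compress`, this lets the test assert that omitting the input file is rejected without aborting the suite on the expected failure.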
00:05:06.624 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:06.624 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.624 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:06.624 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:06.625 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:06.625 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.625 00:05:06.625 real 0m0.462s 00:05:06.625 user 0m0.356s 00:05:06.625 sys 0m0.135s 00:05:06.625 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.625 16:21:46 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:06.625 ************************************ 00:05:06.625 END TEST accel_missing_filename 00:05:06.625 ************************************ 00:05:06.625 16:21:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.625 16:21:46 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:06.625 16:21:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:06.625 16:21:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.625 16:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.884 ************************************ 00:05:06.884 START TEST accel_compress_verify 00:05:06.884 ************************************ 00:05:06.884 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:06.884 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:06.884 16:21:46 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:06.884 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:06.884 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.884 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:06.884 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.884 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:06.884 16:21:46 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:06.884 [2024-07-15 16:21:46.259815] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:06.884 [2024-07-15 16:21:46.259888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401062 ] 00:05:06.884 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.884 [2024-07-15 16:21:46.323793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.884 [2024-07-15 16:21:46.441741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.142 [2024-07-15 16:21:46.503502] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.142 [2024-07-15 16:21:46.592126] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:07.142 00:05:07.142 Compression does not support the verify option, aborting. 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.142 00:05:07.142 real 0m0.477s 00:05:07.142 user 0m0.359s 00:05:07.142 sys 0m0.151s 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.142 16:21:46 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:07.142 ************************************ 00:05:07.142 END TEST accel_compress_verify 00:05:07.142 ************************************ 00:05:07.142 16:21:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.142 16:21:46 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.401 ************************************ 00:05:07.401 START TEST accel_wrong_workload 00:05:07.401 ************************************ 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:07.401 16:21:46 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:07.401 Unsupported workload type: foobar 00:05:07.401 [2024-07-15 16:21:46.786273] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:07.401 accel_perf options: 00:05:07.401 [-h help message] 00:05:07.401 [-q queue depth per core] 00:05:07.401 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.401 [-T number of threads per core 00:05:07.401 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.401 [-t time in seconds] 00:05:07.401 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.401 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:07.401 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.401 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.401 [-S for crc32c workload, use this seed value (default 0) 00:05:07.401 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.401 [-f for fill workload, use this BYTE value (default 255) 00:05:07.401 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.401 [-y verify result if this switch is on] 00:05:07.401 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.401 Can be used to spread operations across a wider range of memory. 
00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.401 00:05:07.401 real 0m0.024s 00:05:07.401 user 0m0.010s 00:05:07.401 sys 0m0.014s 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.401 16:21:46 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:07.401 ************************************ 00:05:07.401 END TEST accel_wrong_workload 00:05:07.401 ************************************ 00:05:07.401 Error: writing output failed: Broken pipe 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.401 16:21:46 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.401 ************************************ 00:05:07.401 START TEST accel_negative_buffers 00:05:07.401 ************************************ 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.401 16:21:46 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:07.401 16:21:46 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:07.401 -x option must be non-negative. 00:05:07.401 [2024-07-15 16:21:46.848842] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:07.401 accel_perf options: 00:05:07.401 [-h help message] 00:05:07.401 [-q queue depth per core] 00:05:07.401 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.401 [-T number of threads per core 00:05:07.401 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:07.401 [-t time in seconds] 00:05:07.401 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.401 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:07.401 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.401 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.401 [-S for crc32c workload, use this seed value (default 0) 00:05:07.401 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.401 [-f for fill workload, use this BYTE value (default 255) 00:05:07.401 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.401 [-y verify result if this switch is on] 00:05:07.401 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.401 Can be used to spread operations across a wider range of memory. 
00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.401 00:05:07.401 real 0m0.020s 00:05:07.401 user 0m0.012s 00:05:07.401 sys 0m0.008s 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.401 16:21:46 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:07.401 ************************************ 00:05:07.401 END TEST accel_negative_buffers 00:05:07.401 ************************************ 00:05:07.401 Error: writing output failed: Broken pipe 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.401 16:21:46 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.401 16:21:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.401 ************************************ 00:05:07.401 START TEST accel_crc32c 00:05:07.401 ************************************ 00:05:07.402 16:21:46 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:07.402 16:21:46 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:07.402 [2024-07-15 16:21:46.919061] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:07.402 [2024-07-15 16:21:46.919124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401127 ] 00:05:07.402 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.402 [2024-07-15 16:21:46.983994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.660 [2024-07-15 16:21:47.102053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 
16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.660 16:21:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:09.034 16:21:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.034 00:05:09.034 real 0m1.474s 00:05:09.034 user 0m1.322s 00:05:09.034 sys 0m0.154s 00:05:09.034 16:21:48 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.034 16:21:48 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:09.034 ************************************ 00:05:09.034 END TEST accel_crc32c 00:05:09.034 ************************************ 00:05:09.034 16:21:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:09.034 16:21:48 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:09.034 16:21:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:09.034 16:21:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.034 16:21:48 accel -- common/autotest_common.sh@10 -- # set +x 
00:05:09.034 ************************************ 00:05:09.034 START TEST accel_crc32c_C2 00:05:09.034 ************************************ 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.034 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:09.034 [2024-07-15 16:21:48.433754] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:09.034 [2024-07-15 16:21:48.433817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401403 ] 00:05:09.034 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.034 [2024-07-15 16:21:48.497125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.034 [2024-07-15 16:21:48.615832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.293 16:21:48 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.293 16:21:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.668 
16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.668 00:05:10.668 real 0m1.478s 00:05:10.668 user 0m1.337s 00:05:10.668 sys 0m0.144s 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.668 16:21:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:10.668 ************************************ 00:05:10.668 END TEST accel_crc32c_C2 00:05:10.668 ************************************ 00:05:10.668 16:21:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.668 16:21:49 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:10.668 16:21:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:10.668 16:21:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.668 16:21:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.668 ************************************ 00:05:10.668 START TEST accel_copy 00:05:10.668 ************************************ 00:05:10.668 16:21:49 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:10.668 16:21:49 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:10.668 [2024-07-15 16:21:49.965659] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:10.668 [2024-07-15 16:21:49.965720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401562 ] 00:05:10.668 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.668 [2024-07-15 16:21:50.030077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.668 [2024-07-15 16:21:50.153497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:50 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:10.668 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.669 16:21:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:12.043 16:21:51 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.043 00:05:12.043 real 0m1.488s 00:05:12.043 user 0m1.344s 00:05:12.043 sys 0m0.145s 00:05:12.043 16:21:51 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.043 16:21:51 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:12.043 ************************************ 00:05:12.043 END TEST accel_copy 00:05:12.043 ************************************ 00:05:12.043 16:21:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.043 16:21:51 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.043 16:21:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:12.043 16:21:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.043 16:21:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.043 ************************************ 00:05:12.043 START TEST accel_fill 00:05:12.043 ************************************ 00:05:12.043 16:21:51 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.043 16:21:51 
accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:12.043 16:21:51 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:12.043 [2024-07-15 16:21:51.498983] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:12.043 [2024-07-15 16:21:51.499049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401718 ] 00:05:12.043 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.043 [2024-07-15 16:21:51.563220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.301 [2024-07-15 16:21:51.682037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.301 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.302 16:21:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:13.700 16:21:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:13.700 00:05:13.700 real 0m1.471s 00:05:13.700 user 0m1.327s 00:05:13.700 sys 0m0.146s 00:05:13.700 16:21:52 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.700 16:21:52 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:13.700 ************************************ 00:05:13.700 END TEST accel_fill 00:05:13.700 ************************************ 00:05:13.700 16:21:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:13.700 16:21:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:13.700 16:21:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:13.700 16:21:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.700 16:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.700 ************************************ 00:05:13.700 START TEST accel_copy_crc32c 00:05:13.700 ************************************ 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:13.700 16:21:52 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:13.700 [2024-07-15 16:21:53.008503] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:13.700 [2024-07-15 16:21:53.008580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401997 ] 00:05:13.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.700 [2024-07-15 16:21:53.066759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.700 [2024-07-15 16:21:53.185978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.700 16:21:53 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.700 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:13.701 16:21:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.076 00:05:15.076 real 0m1.474s 00:05:15.076 user 0m1.339s 00:05:15.076 sys 0m0.138s 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.076 16:21:54 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:15.076 ************************************ 00:05:15.076 END TEST accel_copy_crc32c 
00:05:15.076 ************************************ 00:05:15.076 16:21:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.076 16:21:54 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.076 16:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:15.076 16:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.076 16:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.076 ************************************ 00:05:15.076 START TEST accel_copy_crc32c_C2 00:05:15.076 ************************************ 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.076 16:21:54 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.076 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:15.076 [2024-07-15 16:21:54.530437] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:15.076 [2024-07-15 16:21:54.530491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402152 ] 00:05:15.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.076 [2024-07-15 16:21:54.593291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.335 [2024-07-15 16:21:54.716298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.335 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.336 
16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.336 16:21:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:55 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.710 00:05:16.710 real 0m1.488s 00:05:16.710 user 0m1.347s 00:05:16.710 sys 0m0.143s 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.710 16:21:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:16.710 ************************************ 00:05:16.710 
END TEST accel_copy_crc32c_C2 00:05:16.710 ************************************ 00:05:16.710 16:21:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:16.710 16:21:56 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:16.710 16:21:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:16.710 16:21:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.710 16:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.710 ************************************ 00:05:16.710 START TEST accel_dualcast 00:05:16.710 ************************************ 00:05:16.710 16:21:56 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.710 16:21:56 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:16.710 16:21:56 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:05:16.710 [2024-07-15 16:21:56.067312] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:16.710 [2024-07-15 16:21:56.067382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402330 ] 00:05:16.710 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.710 [2024-07-15 16:21:56.128650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.710 [2024-07-15 16:21:56.250575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.968 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:16.969 16:21:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:16.969 16:21:56 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.341 16:21:57 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:18.341 16:21:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.341 00:05:18.341 real 0m1.484s 00:05:18.341 user 0m1.339s 00:05:18.341 sys 0m0.146s 00:05:18.341 16:21:57 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.341 16:21:57 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:18.342 ************************************ 00:05:18.342 END TEST accel_dualcast 00:05:18.342 ************************************ 00:05:18.342 16:21:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.342 16:21:57 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:18.342 16:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:18.342 16:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.342 16:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.342 ************************************ 00:05:18.342 START TEST accel_compare 00:05:18.342 ************************************ 00:05:18.342 16:21:57 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:18.342 [2024-07-15 16:21:57.597215] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:18.342 [2024-07-15 16:21:57.597280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402585 ] 00:05:18.342 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.342 [2024-07-15 16:21:57.655522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.342 [2024-07-15 16:21:57.777431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:18.342 
16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:18.342 
16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.342 
16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.342 16:21:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.739 16:21:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:19.740 16:21:59 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:19.740 16:21:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.740 00:05:19.740 real 0m1.485s 00:05:19.740 user 0m1.345s 00:05:19.740 sys 0m0.141s 00:05:19.740 16:21:59 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.740 16:21:59 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:19.740 ************************************ 00:05:19.740 END TEST accel_compare 00:05:19.740 ************************************ 00:05:19.740 16:21:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.740 16:21:59 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:19.740 16:21:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:19.740 16:21:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.740 16:21:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.740 ************************************ 00:05:19.740 START TEST accel_xor 00:05:19.740 ************************************ 00:05:19.740 16:21:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:19.740 16:21:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:19.740 [2024-07-15 16:21:59.127392] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:19.740 [2024-07-15 16:21:59.127459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402744 ] 00:05:19.740 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.740 [2024-07-15 16:21:59.191013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.740 [2024-07-15 16:21:59.313664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 
16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.999 16:21:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.371 00:05:21.371 real 0m1.490s 00:05:21.371 user 0m1.344s 00:05:21.371 sys 
0m0.148s 00:05:21.371 16:22:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.371 16:22:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:21.371 ************************************ 00:05:21.371 END TEST accel_xor 00:05:21.371 ************************************ 00:05:21.371 16:22:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.371 16:22:00 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:21.371 16:22:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:21.371 16:22:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.371 16:22:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.371 ************************************ 00:05:21.371 START TEST accel_xor 00:05:21.371 ************************************ 00:05:21.371 16:22:00 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.371 16:22:00 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:21.371 [2024-07-15 16:22:00.664127] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:21.371 [2024-07-15 16:22:00.664194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402957 ] 00:05:21.371 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.371 [2024-07-15 16:22:00.726453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.371 [2024-07-15 16:22:00.850409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.371 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.372 16:22:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.741 16:22:02 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]]
00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:22.741 16:22:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:22.741
00:05:22.741 real 0m1.483s
00:05:22.741 user 0m1.345s
00:05:22.741 sys 0m0.140s
00:05:22.741 16:22:02 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:22.741 16:22:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:22.741 ************************************
00:05:22.741 END TEST accel_xor
00:05:22.741 ************************************
00:05:22.741 16:22:02 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:22.741 16:22:02 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:22.741 16:22:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:22.741 16:22:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.741 16:22:02 accel -- common/autotest_common.sh@10 -- # set +x
00:05:22.741 ************************************
00:05:22.741 START TEST accel_dif_verify
00:05:22.741 ************************************
00:05:22.741 16:22:02 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@12 -- #
build_accel_config 00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:22.741 16:22:02 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:22.741 [2024-07-15 16:22:02.193955] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:22.741 [2024-07-15 16:22:02.194020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403180 ] 00:05:22.741 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.741 [2024-07-15 16:22:02.259708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.999 [2024-07-15 16:22:02.381476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:22.999 16:22:02 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:22.999 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.000 16:22:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.375 16:22:03 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.375 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.376 16:22:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.376 16:22:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.376 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.376 16:22:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.376 16:22:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.376 16:22:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:24.376 16:22:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]]
00:05:24.376
00:05:24.376 real 0m1.491s
00:05:24.376 user 0m1.351s
00:05:24.376 sys 0m0.144s
00:05:24.376 16:22:03 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.376 16:22:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:05:24.376 ************************************
00:05:24.376 END TEST accel_dif_verify
00:05:24.376 ************************************
00:05:24.376 16:22:03 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:24.376 16:22:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:05:24.376 16:22:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:24.376 16:22:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.376 16:22:03 accel -- common/autotest_common.sh@10 -- # set +x
00:05:24.376 ************************************
00:05:24.376 START TEST accel_dif_generate
00:05:24.376 ************************************
00:05:24.376 16:22:03 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:24.376 16:22:03
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:24.376 16:22:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:24.376 [2024-07-15 16:22:03.730575] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:24.376 [2024-07-15 16:22:03.730640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403332 ] 00:05:24.376 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.376 [2024-07-15 16:22:03.796610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.376 [2024-07-15 16:22:03.919498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.633 16:22:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]]
00:05:25.607 16:22:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:25.607
00:05:25.607 real 0m1.489s
00:05:25.607 user 0m1.347s
00:05:25.607 sys 0m0.146s
00:05:25.607 16:22:05 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.607 16:22:05 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:25.607 ************************************
00:05:25.607 END TEST accel_dif_generate
00:05:25.607 ************************************
00:05:25.870 16:22:05 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:25.870 16:22:05 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:25.870 16:22:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:25.870 16:22:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.870 16:22:05 accel -- common/autotest_common.sh@10 -- # set +x
00:05:25.870 ************************************
00:05:25.870 START TEST accel_dif_generate_copy
00:05:25.870 ************************************
00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:25.870 16:22:05
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:25.870 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:25.870 [2024-07-15 16:22:05.268850] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:25.870 [2024-07-15 16:22:05.269007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403585 ] 00:05:25.870 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.870 [2024-07-15 16:22:05.335258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.870 [2024-07-15 16:22:05.458311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:26.128 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.129 16:22:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.501 16:22:06 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.501 00:05:27.501 real 0m1.488s 00:05:27.501 user 0m1.354s 00:05:27.501 sys 0m0.136s 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.501 16:22:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:27.501 ************************************ 00:05:27.501 END TEST accel_dif_generate_copy 00:05:27.501 ************************************ 00:05:27.501 16:22:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.501 16:22:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:27.501 16:22:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.501 16:22:06 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:27.501 16:22:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.501 16:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.501 ************************************ 00:05:27.501 START TEST accel_comp 00:05:27.501 ************************************ 00:05:27.501 16:22:06 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:06 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:27.501 16:22:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:27.501 [2024-07-15 16:22:06.798784] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:27.501 [2024-07-15 16:22:06.798852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403763 ] 00:05:27.501 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.501 [2024-07-15 16:22:06.861327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.501 [2024-07-15 16:22:06.979072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.501 16:22:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.881 16:22:08 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.881 16:22:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:28.882 16:22:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.882 00:05:28.882 real 0m1.478s 00:05:28.882 user 0m1.337s 00:05:28.882 sys 0m0.144s 00:05:28.882 16:22:08 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.882 16:22:08 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:28.882 ************************************ 00:05:28.882 END TEST accel_comp 00:05:28.882 ************************************ 00:05:28.882 16:22:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.882 16:22:08 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.882 16:22:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:28.882 16:22:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.882 16:22:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.882 ************************************ 00:05:28.882 START TEST accel_decomp 00:05:28.882 ************************************ 00:05:28.882 16:22:08 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:28.882 16:22:08 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:05:28.882 [2024-07-15 16:22:08.323027] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:28.882 [2024-07-15 16:22:08.323094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403927 ] 00:05:28.882 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.882 [2024-07-15 16:22:08.386759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.140 [2024-07-15 16:22:08.511343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 
16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.140 16:22:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.513 16:22:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.513 16:22:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.513 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.513 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.513 16:22:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.513 16:22:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:30.514 16:22:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.514 00:05:30.514 real 0m1.489s 00:05:30.514 user 0m1.345s 00:05:30.514 sys 0m0.147s 00:05:30.514 16:22:09 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.514 16:22:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:30.514 ************************************ 00:05:30.514 END TEST accel_decomp 00:05:30.514 ************************************ 00:05:30.514 16:22:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.514 16:22:09 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.514 16:22:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:30.514 16:22:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.514 16:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.514 ************************************ 00:05:30.514 START TEST accel_decomp_full 00:05:30.514 ************************************ 00:05:30.514 16:22:09 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.514 
16:22:09 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:30.514 16:22:09 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:30.514 [2024-07-15 16:22:09.856299] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:30.514 [2024-07-15 16:22:09.856366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404193 ] 00:05:30.514 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.514 [2024-07-15 16:22:09.925203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.514 [2024-07-15 16:22:10.053163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:30.772 16:22:10 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.772 16:22:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.146 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.147 16:22:11 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:32.147 16:22:11 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.147 00:05:32.147 real 0m1.515s 00:05:32.147 user 0m1.367s 00:05:32.147 sys 0m0.151s 00:05:32.147 16:22:11 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.147 16:22:11 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:32.147 ************************************ 00:05:32.147 END TEST accel_decomp_full 00:05:32.147 ************************************ 00:05:32.147 16:22:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.147 16:22:11 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.147 16:22:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:32.147 16:22:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.147 16:22:11 accel 
-- common/autotest_common.sh@10 -- # set +x 00:05:32.147 ************************************ 00:05:32.147 START TEST accel_decomp_mcore 00:05:32.147 ************************************ 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:32.147 [2024-07-15 16:22:11.413664] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:32.147 [2024-07-15 16:22:11.413729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404358 ] 00:05:32.147 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.147 [2024-07-15 16:22:11.476568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.147 [2024-07-15 16:22:11.599684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.147 [2024-07-15 16:22:11.599737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.147 [2024-07-15 16:22:11.599790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.147 [2024-07-15 16:22:11.599793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:32.147 16:22:11 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.147 16:22:11 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.147 16:22:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.518 16:22:12 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.518 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:33.519 
16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.519 00:05:33.519 real 0m1.491s 00:05:33.519 user 0m4.799s 00:05:33.519 sys 0m0.149s 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.519 16:22:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:33.519 ************************************ 00:05:33.519 END TEST accel_decomp_mcore 00:05:33.519 ************************************ 00:05:33.519 16:22:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.519 16:22:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.519 16:22:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:33.519 16:22:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.519 16:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.519 ************************************ 00:05:33.519 START TEST accel_decomp_full_mcore 00:05:33.519 ************************************ 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:33.519 16:22:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:33.519 [2024-07-15 16:22:12.953274] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:33.519 [2024-07-15 16:22:12.953340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404517 ] 00:05:33.519 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.519 [2024-07-15 16:22:13.019953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.778 [2024-07-15 16:22:13.147395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.778 [2024-07-15 16:22:13.147473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.778 [2024-07-15 16:22:13.147476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.778 [2024-07-15 16:22:13.147420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:33.778 16:22:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.151 
16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.151 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=:
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:35.152
00:05:35.152 real 0m1.509s
00:05:35.152 user 0m4.840s
00:05:35.152 sys 0m0.154s
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:35.152 16:22:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:35.152 ************************************
00:05:35.152 END TEST accel_decomp_full_mcore
00:05:35.152 ************************************
00:05:35.152 16:22:14 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:35.152 16:22:14 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:35.152 16:22:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:35.152 16:22:14 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable
00:05:35.152 16:22:14 accel -- common/autotest_common.sh@10 -- # set +x
00:05:35.152 ************************************
00:05:35.152 START TEST accel_decomp_mthread
00:05:35.152 ************************************
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:05:35.152 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:35.152 [2024-07-15 16:22:14.513012] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:05:35.152 [2024-07-15 16:22:14.513086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404794 ]
00:05:35.152 EAL: No free 2048 kB hugepages reported on node 1
00:05:35.152 [2024-07-15 16:22:14.577841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.152 [2024-07-15 16:22:14.699865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:35.410 16:22:14 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:35.410 
16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.410 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.411 16:22:14 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.411 16:22:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:36.787 16:22:15 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:36.787
00:05:36.787 real 0m1.489s
00:05:36.787 user 0m1.343s
00:05:36.787 sys 0m0.149s
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:36.787 16:22:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:36.787 ************************************
00:05:36.787 END TEST accel_decomp_mthread
00:05:36.787 ************************************
00:05:36.787 16:22:16 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:36.787 16:22:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:36.787 16:22:16 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:05:36.787 16:22:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:36.787 16:22:16 accel -- common/autotest_common.sh@10 -- # set +x
00:05:36.787 ************************************
00:05:36.787 START TEST accel_decomp_full_mthread
00:05:36.787 ************************************
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:05:36.787 [2024-07-15 16:22:16.045788] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:05:36.787 [2024-07-15 16:22:16.045861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404954 ]
00:05:36.787 EAL: No free 2048 kB hugepages reported on node 1
00:05:36.787 [2024-07-15 16:22:16.108159] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.787 [2024-07-15 16:22:16.229677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:36.787 16:22:16 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:36.787 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:36.788 16:22:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:38.159
00:05:38.159 real 0m1.526s
00:05:38.159 user 0m1.382s
00:05:38.159 sys 0m0.147s
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:38.159 16:22:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:38.159 ************************************
00:05:38.159 END TEST accel_decomp_full_mthread
00:05:38.159 ************************************
00:05:38.159 16:22:17 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:38.159 16:22:17 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:38.159 16:22:17 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:38.159 
16:22:17 accel -- accel/accel.sh@137 -- # build_accel_config
00:05:38.159 16:22:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:38.159 16:22:17 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:05:38.159 16:22:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:38.159 16:22:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:38.159 16:22:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:38.159 16:22:17 accel -- common/autotest_common.sh@10 -- # set +x
00:05:38.159 16:22:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:38.159 16:22:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:38.159 16:22:17 accel -- accel/accel.sh@40 -- # local IFS=,
00:05:38.159 16:22:17 accel -- accel/accel.sh@41 -- # jq -r .
00:05:38.159 ************************************
00:05:38.159 START TEST accel_dif_functional_tests
00:05:38.159 ************************************
00:05:38.159 16:22:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:38.159 [2024-07-15 16:22:17.645093] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:05:38.159 [2024-07-15 16:22:17.645173] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405115 ]
00:05:38.159 EAL: No free 2048 kB hugepages reported on node 1
00:05:38.159 [2024-07-15 16:22:17.706740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:38.417 [2024-07-15 16:22:17.831308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:38.417 [2024-07-15 16:22:17.831359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:38.417 [2024-07-15 16:22:17.831363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.418
00:05:38.418
00:05:38.418 CUnit - A unit testing framework for C - Version 2.1-3
00:05:38.418 http://cunit.sourceforge.net/
00:05:38.418
00:05:38.418
00:05:38.418 Suite: accel_dif
00:05:38.418 Test: verify: DIF generated, GUARD check ...passed
00:05:38.418 Test: verify: DIF generated, APPTAG check ...passed
00:05:38.418 Test: verify: DIF generated, REFTAG check ...passed
00:05:38.418 Test: verify: DIF not generated, GUARD check ...[2024-07-15 16:22:17.932502] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:38.418 passed
00:05:38.418 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 16:22:17.932578] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:38.418 passed
00:05:38.418 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 16:22:17.932618] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:38.418 passed
00:05:38.418 Test: verify: APPTAG correct, APPTAG check ...passed
00:05:38.418 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 16:22:17.932698] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:38.418 passed
00:05:38.418 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:38.418 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:38.418 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:38.418 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 16:22:17.932856] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:38.418 passed
00:05:38.418 Test: verify copy: DIF generated, GUARD check ...passed
00:05:38.418 Test: verify copy: DIF generated, APPTAG check ...passed
00:05:38.418 Test: verify copy: DIF generated, REFTAG check ...passed
00:05:38.418 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 16:22:17.933042] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:38.418 passed
00:05:38.418 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 16:22:17.933084] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:38.418 passed
00:05:38.418 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 16:22:17.933125] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:38.418 passed
00:05:38.418 Test: generate copy: DIF generated, GUARD check ...passed
00:05:38.418 Test: generate copy: DIF generated, APTTAG check ...passed
00:05:38.418 Test: generate copy: DIF generated, REFTAG check ...passed
00:05:38.418 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:38.418 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:38.418 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:38.418 Test: generate copy: iovecs-len validate ...[2024-07-15 16:22:17.933382] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:38.418 passed 00:05:38.418 Test: generate copy: buffer alignment validate ...passed 00:05:38.418 00:05:38.418 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.418 suites 1 1 n/a 0 0 00:05:38.418 tests 26 26 26 0 0 00:05:38.418 asserts 115 115 115 0 n/a 00:05:38.418 00:05:38.418 Elapsed time = 0.003 seconds 00:05:38.677 00:05:38.677 real 0m0.598s 00:05:38.677 user 0m0.908s 00:05:38.677 sys 0m0.188s 00:05:38.677 16:22:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.677 16:22:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:38.677 ************************************ 00:05:38.677 END TEST accel_dif_functional_tests 00:05:38.677 ************************************ 00:05:38.677 16:22:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.677 00:05:38.677 real 0m34.175s 00:05:38.677 user 0m37.767s 00:05:38.677 sys 0m4.679s 00:05:38.677 16:22:18 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.677 16:22:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.677 ************************************ 00:05:38.677 END TEST accel 00:05:38.677 ************************************ 00:05:38.677 16:22:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.677 16:22:18 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:38.677 16:22:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.677 16:22:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.677 16:22:18 -- common/autotest_common.sh@10 -- # set +x 00:05:38.677 ************************************ 00:05:38.677 START TEST accel_rpc 00:05:38.677 ************************************ 00:05:38.677 16:22:18 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:38.936 * Looking for test storage... 
00:05:38.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:38.936 16:22:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.936 16:22:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1405306 00:05:38.936 16:22:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:38.936 16:22:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1405306 00:05:38.936 16:22:18 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1405306 ']' 00:05:38.936 16:22:18 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.936 16:22:18 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.936 16:22:18 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.936 16:22:18 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.936 16:22:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.936 [2024-07-15 16:22:18.375412] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:38.936 [2024-07-15 16:22:18.375510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405306 ] 00:05:38.936 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.936 [2024-07-15 16:22:18.437367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.194 [2024-07-15 16:22:18.559155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.194 16:22:18 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.194 16:22:18 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:39.194 16:22:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:39.194 16:22:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:39.194 16:22:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:39.194 16:22:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:39.194 16:22:18 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:39.194 16:22:18 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.194 16:22:18 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.194 16:22:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.194 ************************************ 00:05:39.194 START TEST accel_assign_opcode 00:05:39.194 ************************************ 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:05:39.194 [2024-07-15 16:22:18.651844] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.194 [2024-07-15 16:22:18.659856] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.194 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.452 software 00:05:39.452 00:05:39.452 real 0m0.305s 00:05:39.452 user 0m0.042s 00:05:39.452 sys 0m0.010s 00:05:39.452 16:22:18 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.452 16:22:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.452 ************************************ 00:05:39.452 END TEST accel_assign_opcode 00:05:39.452 ************************************ 00:05:39.452 16:22:18 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:39.452 16:22:18 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1405306 00:05:39.452 16:22:18 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1405306 ']' 00:05:39.452 16:22:18 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1405306 00:05:39.452 16:22:18 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:39.452 16:22:18 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.452 16:22:18 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1405306 00:05:39.452 16:22:19 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.452 16:22:19 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.452 16:22:19 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1405306' 00:05:39.452 killing process with pid 1405306 00:05:39.452 16:22:19 accel_rpc -- common/autotest_common.sh@967 -- # kill 1405306 00:05:39.452 16:22:19 accel_rpc -- common/autotest_common.sh@972 -- # wait 1405306 00:05:40.017 00:05:40.017 real 0m1.207s 00:05:40.017 user 0m1.179s 00:05:40.017 sys 0m0.434s 00:05:40.017 16:22:19 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.017 16:22:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.017 ************************************ 00:05:40.017 END TEST accel_rpc 00:05:40.017 ************************************ 00:05:40.017 16:22:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.017 16:22:19 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:40.017 16:22:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.017 16:22:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.017 16:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.017 ************************************ 00:05:40.017 START TEST app_cmdline 00:05:40.017 ************************************ 00:05:40.017 16:22:19 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:40.017 * Looking for test storage... 00:05:40.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:40.017 16:22:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:40.017 16:22:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1405510 00:05:40.017 16:22:19 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:40.017 16:22:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1405510 00:05:40.017 16:22:19 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1405510 ']' 00:05:40.017 16:22:19 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.017 16:22:19 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.017 16:22:19 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.017 16:22:19 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.017 16:22:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.274 [2024-07-15 16:22:19.627620] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:40.274 [2024-07-15 16:22:19.627715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405510 ] 00:05:40.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.274 [2024-07-15 16:22:19.690041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.274 [2024-07-15 16:22:19.802057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.531 16:22:20 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.531 16:22:20 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:40.531 16:22:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:40.789 { 00:05:40.789 "version": "SPDK v24.09-pre git sha1 72fc6988f", 00:05:40.789 "fields": { 00:05:40.789 "major": 24, 00:05:40.789 "minor": 9, 00:05:40.789 "patch": 0, 00:05:40.789 "suffix": "-pre", 00:05:40.789 "commit": "72fc6988f" 00:05:40.789 } 00:05:40.789 } 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:40.789 16:22:20 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:40.789 16:22:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:40.789 16:22:20 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.047 request: 00:05:41.047 { 00:05:41.047 "method": "env_dpdk_get_mem_stats", 00:05:41.047 "req_id": 1 
00:05:41.047 } 00:05:41.047 Got JSON-RPC error response 00:05:41.047 response: 00:05:41.047 { 00:05:41.047 "code": -32601, 00:05:41.047 "message": "Method not found" 00:05:41.047 } 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.047 16:22:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1405510 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1405510 ']' 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1405510 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.047 16:22:20 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1405510 00:05:41.304 16:22:20 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.304 16:22:20 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.304 16:22:20 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1405510' 00:05:41.304 killing process with pid 1405510 00:05:41.304 16:22:20 app_cmdline -- common/autotest_common.sh@967 -- # kill 1405510 00:05:41.304 16:22:20 app_cmdline -- common/autotest_common.sh@972 -- # wait 1405510 00:05:41.562 00:05:41.562 real 0m1.596s 00:05:41.562 user 0m1.948s 00:05:41.562 sys 0m0.459s 00:05:41.562 16:22:21 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.562 16:22:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:41.562 ************************************ 00:05:41.562 END TEST app_cmdline 00:05:41.562 ************************************ 00:05:41.562 16:22:21 -- 
common/autotest_common.sh@1142 -- # return 0 00:05:41.562 16:22:21 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:41.562 16:22:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.562 16:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.562 16:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:41.821 ************************************ 00:05:41.821 START TEST version 00:05:41.821 ************************************ 00:05:41.821 16:22:21 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:41.821 * Looking for test storage... 00:05:41.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.821 16:22:21 version -- app/version.sh@17 -- # get_header_version major 00:05:41.821 16:22:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.821 16:22:21 version -- app/version.sh@14 -- # cut -f2 00:05:41.821 16:22:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.821 16:22:21 version -- app/version.sh@17 -- # major=24 00:05:41.821 16:22:21 version -- app/version.sh@18 -- # get_header_version minor 00:05:41.821 16:22:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.821 16:22:21 version -- app/version.sh@14 -- # cut -f2 00:05:41.821 16:22:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.821 16:22:21 version -- app/version.sh@18 -- # minor=9 00:05:41.821 16:22:21 version -- app/version.sh@19 -- # get_header_version patch 00:05:41.821 16:22:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.821 
16:22:21 version -- app/version.sh@14 -- # cut -f2 00:05:41.821 16:22:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.821 16:22:21 version -- app/version.sh@19 -- # patch=0 00:05:41.821 16:22:21 version -- app/version.sh@20 -- # get_header_version suffix 00:05:41.821 16:22:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.821 16:22:21 version -- app/version.sh@14 -- # cut -f2 00:05:41.821 16:22:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.821 16:22:21 version -- app/version.sh@20 -- # suffix=-pre 00:05:41.821 16:22:21 version -- app/version.sh@22 -- # version=24.9 00:05:41.821 16:22:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:41.821 16:22:21 version -- app/version.sh@28 -- # version=24.9rc0 00:05:41.821 16:22:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.821 16:22:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:41.821 16:22:21 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:41.821 16:22:21 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:41.821 00:05:41.821 real 0m0.103s 00:05:41.821 user 0m0.056s 00:05:41.821 sys 0m0.069s 00:05:41.821 16:22:21 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.821 16:22:21 version -- common/autotest_common.sh@10 -- # set +x 00:05:41.821 ************************************ 00:05:41.821 END TEST version 00:05:41.821 ************************************ 00:05:41.821 16:22:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.821 16:22:21 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:05:41.821 16:22:21 -- spdk/autotest.sh@198 -- # uname -s 00:05:41.821 16:22:21 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:41.821 16:22:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:41.821 16:22:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:41.821 16:22:21 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:41.821 16:22:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:41.821 16:22:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:41.821 16:22:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.822 16:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:41.822 16:22:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:41.822 16:22:21 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:41.822 16:22:21 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:41.822 16:22:21 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:41.822 16:22:21 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:41.822 16:22:21 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:41.822 16:22:21 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:41.822 16:22:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:41.822 16:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.822 16:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:41.822 ************************************ 00:05:41.822 START TEST nvmf_tcp 00:05:41.822 ************************************ 00:05:41.822 16:22:21 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:41.822 * Looking for test storage... 00:05:41.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.822 16:22:21 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.083 16:22:21 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.083 16:22:21 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.083 16:22:21 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.083 16:22:21 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:42.083 16:22:21 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:42.083 16:22:21 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.083 16:22:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:42.083 16:22:21 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:42.083 16:22:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:42.083 16:22:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.083 16:22:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.083 ************************************ 00:05:42.083 START TEST nvmf_example 00:05:42.083 ************************************ 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:42.083 * Looking for test storage... 
00:05:42.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.083 16:22:21 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:42.083 16:22:21 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:42.083 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:42.084 
16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:42.084 16:22:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:43.987 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:43.988 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:43.988 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:43.988 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:43.988 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:43.988 16:22:23 
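The discovery loop traced above walks the NIC PCI addresses it collected (matching vendor/device IDs such as Intel 0x8086 / E810 0x159b) and reports the kernel net devices exposed under each device's sysfs node. A minimal sketch of that sysfs lookup, written as a standalone function (the function name and the `base` parameter are illustrative, not part of SPDK's `nvmf/common.sh`; only the E810 0x159b ID from this log is matched):

```shell
#!/usr/bin/env bash
# Sketch of the "Found net devices under <pci>" step seen in the trace:
# for each PCI device matching vendor 0x8086 / device 0x159b, print the
# net interfaces under its sysfs node. `base` defaults to the real sysfs
# tree but can point at a test directory.
find_net_devs() {
  local base=${1:-/sys/bus/pci/devices} pci net
  for pci in "$base"/*; do
    [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x8086" ] || continue
    [ "$(cat "$pci/device" 2>/dev/null)" = "0x159b" ] || continue
    for net in "$pci"/net/*; do
      [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done
  return 0
}

# Example (on the GP11 node above this prints cvl_0_0 and cvl_0_1):
#   find_net_devs
```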
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:43.988 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:44.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:44.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:05:44.248 00:05:44.248 --- 10.0.0.2 ping statistics --- 00:05:44.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.248 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:44.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:44.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:05:44.248 00:05:44.248 --- 10.0.0.1 ping statistics --- 00:05:44.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.248 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.248 16:22:23 
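The `nvmf_tcp_init` sequence traced above splits the two ports of one NIC into a target/initiator pair: the target-side port moves into a network namespace, both sides get point-to-point 10.0.0.0/24 addresses, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction verifies the link. A condensed sketch of those commands, as a function (interface names, namespace name, and addresses are taken from this log; this is a replay of the trace, not SPDK's own helper, and running it requires root on a machine with these ports):

```shell
#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above: move the target port into
# a namespace, address both ends, open TCP/4420, and ping both directions.
netns_setup() {
  local ns=$1 tgt=$2 ini=$3
  ip -4 addr flush "$tgt"
  ip -4 addr flush "$ini"
  ip netns add "$ns"
  ip link set "$tgt" netns "$ns"                      # target port lives in $ns
  ip addr add 10.0.0.1/24 dev "$ini"                  # initiator side (host)
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
  ip link set "$ini" up
  ip netns exec "$ns" ip link set "$tgt" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> namespace
  ip netns exec "$ns" ping -c 1 10.0.0.1              # namespace -> host
}

# Names from the log; guarded because the calls need root and real ports.
if [ "$(id -u)" -eq 0 ]; then
  netns_setup cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
fi
```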
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1407525 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1407525 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1407525 ']' 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.248 16:22:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.248 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.182 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.183 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:45.183 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.183 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 16:22:24 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:45.484 16:22:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:45.484 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.468 Initializing NVMe Controllers 00:05:55.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:55.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:55.468 Initialization complete. Launching workers. 
00:05:55.468 ======================================================== 00:05:55.468 Latency(us) 00:05:55.469 Device Information : IOPS MiB/s Average min max 00:05:55.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15166.47 59.24 4220.38 874.95 15417.49 00:05:55.469 ======================================================== 00:05:55.469 Total : 15166.47 59.24 4220.38 874.95 15417.49 00:05:55.469 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:55.469 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:55.469 rmmod nvme_tcp 00:05:55.727 rmmod nvme_fabrics 00:05:55.727 rmmod nvme_keyring 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1407525 ']' 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1407525 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1407525 ']' 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1407525 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- 
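The configuration and measurement traced above reduce to five RPC calls against the running nvmf target (create the TCP transport, create a 64 MiB / 512 B-block malloc bdev, create subsystem cnode1, attach the namespace, add the 10.0.0.2:4420 listener) followed by one `spdk_nvme_perf` run. The sketch below only prints that command plan; actually executing it needs a running SPDK target on `/var/tmp/spdk.sock` inside the namespace, and `$SPDK_DIR` stands in for the workspace path in the log:

```shell
#!/usr/bin/env bash
# Command plan condensed from the rpc_cmd trace above. Printed, not run:
# execution requires a live nvmf target; $SPDK_DIR is a placeholder for
# the /var/jenkins/workspace/.../spdk checkout used in this job.
nvmf_example_plan() {
  cat <<'EOF'
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
EOF
}

nvmf_example_plan
```

The perf flags mirror the run that produced the latency table above: queue depth 64, 4 KiB I/O, 70/30 random read/write, 10 seconds.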
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1407525 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1407525' 00:05:55.727 killing process with pid 1407525 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1407525 00:05:55.727 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1407525 00:05:55.986 nvmf threads initialize successfully 00:05:55.986 bdev subsystem init successfully 00:05:55.986 created a nvmf target service 00:05:55.986 create targets's poll groups done 00:05:55.986 all subsystems of target started 00:05:55.986 nvmf target is running 00:05:55.986 all subsystems of target stopped 00:05:55.986 destroy targets's poll groups done 00:05:55.986 destroyed the nvmf target service 00:05:55.986 bdev subsystem finish successfully 00:05:55.986 nvmf threads destroy successfully 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:55.986 16:22:35 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.891 16:22:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:57.891 16:22:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:57.891 16:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.892 16:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:57.892 00:05:57.892 real 0m15.995s 00:05:57.892 user 0m45.365s 00:05:57.892 sys 0m3.207s 00:05:57.892 16:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.892 16:22:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:57.892 ************************************ 00:05:57.892 END TEST nvmf_example 00:05:57.892 ************************************ 00:05:57.892 16:22:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:57.892 16:22:37 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:57.892 16:22:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:57.892 16:22:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.892 16:22:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.153 ************************************ 00:05:58.153 START TEST nvmf_filesystem 00:05:58.154 ************************************ 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:58.154 * Looking for test storage... 
00:05:58.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:58.154 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:58.154 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:58.155 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:58.155 #define SPDK_CONFIG_H 00:05:58.155 
#define SPDK_CONFIG_APPS 1 00:05:58.155 #define SPDK_CONFIG_ARCH native 00:05:58.155 #undef SPDK_CONFIG_ASAN 00:05:58.155 #undef SPDK_CONFIG_AVAHI 00:05:58.155 #undef SPDK_CONFIG_CET 00:05:58.155 #define SPDK_CONFIG_COVERAGE 1 00:05:58.155 #define SPDK_CONFIG_CROSS_PREFIX 00:05:58.155 #undef SPDK_CONFIG_CRYPTO 00:05:58.155 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:58.155 #undef SPDK_CONFIG_CUSTOMOCF 00:05:58.155 #undef SPDK_CONFIG_DAOS 00:05:58.155 #define SPDK_CONFIG_DAOS_DIR 00:05:58.155 #define SPDK_CONFIG_DEBUG 1 00:05:58.155 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:58.155 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:58.155 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:58.155 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:58.155 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:58.155 #undef SPDK_CONFIG_DPDK_UADK 00:05:58.155 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:58.155 #define SPDK_CONFIG_EXAMPLES 1 00:05:58.155 #undef SPDK_CONFIG_FC 00:05:58.155 #define SPDK_CONFIG_FC_PATH 00:05:58.155 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:58.155 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:58.155 #undef SPDK_CONFIG_FUSE 00:05:58.155 #undef SPDK_CONFIG_FUZZER 00:05:58.155 #define SPDK_CONFIG_FUZZER_LIB 00:05:58.155 #undef SPDK_CONFIG_GOLANG 00:05:58.155 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:58.155 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:58.155 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:58.155 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:58.155 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:58.155 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:58.155 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:58.155 #define SPDK_CONFIG_IDXD 1 00:05:58.155 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:58.155 #undef SPDK_CONFIG_IPSEC_MB 00:05:58.155 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:58.155 #define SPDK_CONFIG_ISAL 1 00:05:58.155 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:58.155 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:05:58.155 #define SPDK_CONFIG_LIBDIR 00:05:58.155 #undef SPDK_CONFIG_LTO 00:05:58.155 #define SPDK_CONFIG_MAX_LCORES 128 00:05:58.155 #define SPDK_CONFIG_NVME_CUSE 1 00:05:58.155 #undef SPDK_CONFIG_OCF 00:05:58.155 #define SPDK_CONFIG_OCF_PATH 00:05:58.155 #define SPDK_CONFIG_OPENSSL_PATH 00:05:58.155 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:58.155 #define SPDK_CONFIG_PGO_DIR 00:05:58.155 #undef SPDK_CONFIG_PGO_USE 00:05:58.155 #define SPDK_CONFIG_PREFIX /usr/local 00:05:58.155 #undef SPDK_CONFIG_RAID5F 00:05:58.155 #undef SPDK_CONFIG_RBD 00:05:58.155 #define SPDK_CONFIG_RDMA 1 00:05:58.155 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:58.155 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:58.155 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:58.155 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:58.155 #define SPDK_CONFIG_SHARED 1 00:05:58.155 #undef SPDK_CONFIG_SMA 00:05:58.155 #define SPDK_CONFIG_TESTS 1 00:05:58.155 #undef SPDK_CONFIG_TSAN 00:05:58.155 #define SPDK_CONFIG_UBLK 1 00:05:58.155 #define SPDK_CONFIG_UBSAN 1 00:05:58.155 #undef SPDK_CONFIG_UNIT_TESTS 00:05:58.155 #undef SPDK_CONFIG_URING 00:05:58.155 #define SPDK_CONFIG_URING_PATH 00:05:58.155 #undef SPDK_CONFIG_URING_ZNS 00:05:58.155 #undef SPDK_CONFIG_USDT 00:05:58.155 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:58.155 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:58.155 #define SPDK_CONFIG_VFIO_USER 1 00:05:58.155 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:58.155 #define SPDK_CONFIG_VHOST 1 00:05:58.155 #define SPDK_CONFIG_VIRTIO 1 00:05:58.155 #undef SPDK_CONFIG_VTUNE 00:05:58.155 #define SPDK_CONFIG_VTUNE_DIR 00:05:58.155 #define SPDK_CONFIG_WERROR 1 00:05:58.156 #define SPDK_CONFIG_WPDK_DIR 00:05:58.156 #undef SPDK_CONFIG_XNVME 00:05:58.156 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:58.156 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:58.156 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:58.157 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:58.157 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:58.157 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:58.157 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:58.157 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:58.158 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:58.158 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:58.159 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1409238 ]] 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1409238 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.56gzMV 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.56gzMV/tests/target /tmp/spdk.56gzMV 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:05:58.159 16:22:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55519100928 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6475591680 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=8761344 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996291584 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1056768 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:58.159 * Looking for test storage... 
00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55519100928 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:58.159 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8690184192 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.160 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:58.161 16:22:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:00.690 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:00.690 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:00.690 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:00.690 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.690 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:00.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:00.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:06:00.691 00:06:00.691 --- 10.0.0.2 ping statistics --- 00:06:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.691 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:00.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:00.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms
00:06:00.691
00:06:00.691 --- 10.0.0.1 ping statistics ---
00:06:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:00.691 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:06:00.691 ************************************
00:06:00.691 START TEST nvmf_filesystem_no_in_capsule
00:06:00.691 ************************************
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1410860
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1410860
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1410860 ']'
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:00.691 16:22:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:00.691 [2024-07-15 16:22:40.016599] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:06:00.691 [2024-07-15 16:22:40.016717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:00.691 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.691 [2024-07-15 16:22:40.087235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:00.691 [2024-07-15 16:22:40.210785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:00.691 [2024-07-15 16:22:40.210850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:00.691 [2024-07-15 16:22:40.210884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:00.691 [2024-07-15 16:22:40.210901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:00.691 [2024-07-15 16:22:40.210912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:00.691 [2024-07-15 16:22:40.210998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:00.691 [2024-07-15 16:22:40.211053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:00.691 [2024-07-15 16:22:40.211103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:00.691 [2024-07-15 16:22:40.211106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:01.622 [2024-07-15 16:22:40.991989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:01.622 16:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:01.622 Malloc1
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:01.622 [2024-07-15 16:22:41.177330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:06:01.622 {
00:06:01.622 "name": "Malloc1",
00:06:01.622 "aliases": [
00:06:01.622 "0fe917c0-11f5-49f6-8441-1dd683c666af"
00:06:01.622 ],
00:06:01.622 "product_name": "Malloc disk",
00:06:01.622 "block_size": 512,
00:06:01.622 "num_blocks": 1048576,
00:06:01.622 "uuid": "0fe917c0-11f5-49f6-8441-1dd683c666af",
00:06:01.622 "assigned_rate_limits": {
00:06:01.622 "rw_ios_per_sec": 0,
00:06:01.622 "rw_mbytes_per_sec": 0,
00:06:01.622 "r_mbytes_per_sec": 0,
00:06:01.622 "w_mbytes_per_sec": 0
00:06:01.622 },
00:06:01.622 "claimed": true,
00:06:01.622 "claim_type": "exclusive_write",
00:06:01.622 "zoned": false,
00:06:01.622 "supported_io_types": {
00:06:01.622 "read": true,
00:06:01.622 "write": true,
00:06:01.622 "unmap": true,
00:06:01.622 "flush": true,
00:06:01.622 "reset": true,
00:06:01.622 "nvme_admin": false,
00:06:01.622 "nvme_io": false,
00:06:01.622 "nvme_io_md": false,
00:06:01.622 "write_zeroes": true,
00:06:01.622 "zcopy": true,
00:06:01.622 "get_zone_info": false,
00:06:01.622 "zone_management": false,
00:06:01.622 "zone_append": false,
00:06:01.622 "compare": false,
00:06:01.622 "compare_and_write": false,
00:06:01.622 "abort": true,
00:06:01.622 "seek_hole": false,
00:06:01.622 "seek_data": false,
00:06:01.622 "copy": true,
00:06:01.622 "nvme_iov_md": false
00:06:01.622 },
00:06:01.622 "memory_domains": [
00:06:01.622 {
00:06:01.622 "dma_device_id": "system",
00:06:01.622 "dma_device_type": 1
00:06:01.622 },
00:06:01.622 {
00:06:01.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.622 "dma_device_type": 2
00:06:01.622 }
00:06:01.622 ],
00:06:01.622 "driver_specific": {}
00:06:01.622 }
00:06:01.622 ]'
00:06:01.622 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:06:01.878 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:06:01.878 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:06:01.878 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:06:01.879 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:06:01.879 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:06:01.879 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:06:01.879 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:06:02.441 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:06:02.441 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:06:02.441 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:06:02.441 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:06:02.441 16:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:06:04.333 16:22:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:06:04.590 16:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:06:05.549 16:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:06.480 ************************************
00:06:06.480 START TEST filesystem_ext4
00:06:06.480 ************************************
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:06:06.480 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:06.480 mke2fs 1.46.5 (30-Dec-2021)
00:06:06.738 Discarding device blocks: 0/522240 done
00:06:06.738 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:06.738 Filesystem UUID: 41c91b9f-2166-41ca-b531-2488b939307e
00:06:06.738 Superblock backups stored on blocks:
00:06:06.738 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:06.738
00:06:06.738 Allocating group tables: 0/64 done
00:06:06.738 Writing inode tables: 0/64 done
00:06:06.738 Creating journal (8192 blocks): done
00:06:06.738 Writing superblocks and filesystem accounting information: 0/64 done
00:06:06.738
00:06:06.738 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0
00:06:06.738 16:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1410860
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:07.669 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:07.670
00:06:07.670 real 0m1.165s
00:06:07.670 user 0m0.015s
00:06:07.670 sys 0m0.062s
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:06:07.670 ************************************
00:06:07.670 END TEST filesystem_ext4
00:06:07.670 ************************************
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:07.670 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:07.964 ************************************
00:06:07.964 START TEST filesystem_btrfs
00:06:07.964 ************************************
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:06:07.964 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:06:08.246 btrfs-progs v6.6.2
00:06:08.246 See https://btrfs.readthedocs.io for more information.
00:06:08.246
00:06:08.246 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:06:08.246 NOTE: several default settings have changed in version 5.15, please make sure
00:06:08.246 this does not affect your deployments:
00:06:08.246 - DUP for metadata (-m dup)
00:06:08.246 - enabled no-holes (-O no-holes)
00:06:08.246 - enabled free-space-tree (-R free-space-tree)
00:06:08.246
00:06:08.246 Label: (null)
00:06:08.246 UUID: 05653f8f-1106-47ca-9d62-2c8ff5990c58
00:06:08.246 Node size: 16384
00:06:08.246 Sector size: 4096
00:06:08.246 Filesystem size: 510.00MiB
00:06:08.246 Block group profiles:
00:06:08.246 Data: single 8.00MiB
00:06:08.246 Metadata: DUP 32.00MiB
00:06:08.246 System: DUP 8.00MiB
00:06:08.246 SSD detected: yes
00:06:08.246 Zoned device: no
00:06:08.246 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:06:08.246 Runtime features: free-space-tree
00:06:08.246 Checksum: crc32c
00:06:08.246 Number of devices: 1
00:06:08.246 Devices:
00:06:08.246 ID SIZE PATH
00:06:08.246 1 510.00MiB /dev/nvme0n1p1
00:06:08.246
00:06:08.246 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0
00:06:08.246 16:22:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1410860
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:08.811
00:06:08.811 real 0m1.117s
00:06:08.811 user 0m0.015s
00:06:08.811 sys 0m0.136s
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:08.811 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:06:08.811 ************************************
00:06:08.811 END TEST filesystem_btrfs
00:06:08.811 ************************************
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:09.069 ************************************
00:06:09.069 START TEST filesystem_xfs
00:06:09.069 ************************************
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:06:09.069 16:22:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:06:09.069 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:06:09.069 = sectsz=512 attr=2, projid32bit=1
00:06:09.069 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:09.069 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:09.069 data = bsize=4096 blocks=130560, imaxpct=25
00:06:09.069 = sunit=0 swidth=0 blks
00:06:09.069 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:09.069 log =internal log bsize=4096 blocks=16384, version=2
00:06:09.069 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:09.069 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:09.999 Discarding blocks...Done.
00:06:09.999 16:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0
00:06:10.257 16:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1410860
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:12.787
00:06:12.787 real 0m3.743s
00:06:12.787 user 0m0.020s
00:06:12.787 sys 0m0.061s
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:06:12.787 ************************************
00:06:12.787 END TEST filesystem_xfs
00:06:12.787 ************************************
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:06:12.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1410860
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1410860 ']'
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1410860
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps
--no-headers -o comm= 1410860 00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1410860' 00:06:12.787 killing process with pid 1410860 00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1410860 00:06:12.787 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1410860 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:13.353 00:06:13.353 real 0m12.885s 00:06:13.353 user 0m49.535s 00:06:13.353 sys 0m1.871s 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:13.353 ************************************ 00:06:13.353 END TEST nvmf_filesystem_no_in_capsule 00:06:13.353 ************************************ 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.353 ************************************ 00:06:13.353 START TEST 
nvmf_filesystem_in_capsule 00:06:13.353 ************************************ 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:13.353 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1412564 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1412564 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1412564 ']' 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:13.354 16:22:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:13.613 [2024-07-15 16:22:52.954503] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:06:13.613 [2024-07-15 16:22:52.954601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:13.613 EAL: No free 2048 kB hugepages reported on node 1
00:06:13.613 [2024-07-15 16:22:53.023657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:13.613 [2024-07-15 16:22:53.146323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:13.613 [2024-07-15 16:22:53.146386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:13.613 [2024-07-15 16:22:53.146412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:13.613 [2024-07-15 16:22:53.146425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:13.613 [2024-07-15 16:22:53.146437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:13.613 [2024-07-15 16:22:53.146500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:13.613 [2024-07-15 16:22:53.146535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:13.613 [2024-07-15 16:22:53.146567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:13.613 [2024-07-15 16:22:53.146570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:14.545 [2024-07-15 16:22:53.926016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.545 16:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.545 Malloc1 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.545 16:22:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:14.545 [2024-07-15 16:22:54.109465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:14.545 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:06:14.545 {
00:06:14.545 "name": "Malloc1",
00:06:14.545 "aliases": [
00:06:14.545 "dafb8127-941a-400e-974d-0ae49a5c7b05"
00:06:14.545 ],
00:06:14.545 "product_name": "Malloc disk",
00:06:14.545 "block_size": 512,
00:06:14.545 "num_blocks": 1048576,
00:06:14.545 "uuid": "dafb8127-941a-400e-974d-0ae49a5c7b05",
00:06:14.545 "assigned_rate_limits": {
00:06:14.545 "rw_ios_per_sec": 0,
00:06:14.545 "rw_mbytes_per_sec": 0,
00:06:14.545 "r_mbytes_per_sec": 0,
00:06:14.545 "w_mbytes_per_sec": 0
00:06:14.545 },
00:06:14.545 "claimed": true,
00:06:14.545 "claim_type": "exclusive_write",
00:06:14.545 "zoned": false,
00:06:14.545 "supported_io_types": {
00:06:14.545 "read": true,
00:06:14.545 "write": true,
00:06:14.545 "unmap": true,
00:06:14.545 "flush": true,
00:06:14.545 "reset": true,
00:06:14.545 "nvme_admin": false,
00:06:14.545 "nvme_io": false,
00:06:14.545 "nvme_io_md": false,
00:06:14.545 "write_zeroes": true,
00:06:14.545 "zcopy": true,
00:06:14.545 "get_zone_info": false,
00:06:14.545 "zone_management": false,
00:06:14.545 "zone_append": false,
00:06:14.546 "compare": false,
00:06:14.546 "compare_and_write": false,
00:06:14.546 "abort": true,
00:06:14.546 "seek_hole": false,
00:06:14.546 "seek_data": false,
00:06:14.546 "copy": true,
00:06:14.546 "nvme_iov_md": false
00:06:14.546 },
00:06:14.546 "memory_domains": [
00:06:14.546 {
00:06:14.546 "dma_device_id": "system",
00:06:14.546 "dma_device_type": 1
00:06:14.546 },
00:06:14.546 {
00:06:14.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:14.546 "dma_device_type": 2
00:06:14.546 }
00:06:14.546 ],
00:06:14.546 "driver_specific": {}
00:06:14.546 }
00:06:14.546 ]'
16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:06:14.802 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:06:14.802 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:06:14.802 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:06:14.802 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:06:14.802 16:22:54
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:14.802 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:14.802 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:15.367 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:15.367 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:15.367 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:15.367 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:15.367 16:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:17.890 16:22:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:18.171 16:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:19.101 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:19.101 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:06:19.101 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:19.101 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.101 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.359 ************************************ 00:06:19.359 START TEST filesystem_in_capsule_ext4 00:06:19.359 ************************************ 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:19.359 16:22:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:19.359 16:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:19.359 mke2fs 1.46.5 (30-Dec-2021) 00:06:19.359 Discarding device blocks: 0/522240 done 00:06:19.359 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:19.359 Filesystem UUID: 5ac02abb-86b4-4a88-98c6-8fd85c77b322 00:06:19.359 Superblock backups stored on blocks: 00:06:19.359 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:19.359 00:06:19.359 Allocating group tables: 0/64 done 00:06:19.359 Writing inode tables: 0/64 done 00:06:19.615 Creating journal (8192 blocks): done 00:06:20.545 Writing superblocks and filesystem accounting information: 0/64 done 00:06:20.545 00:06:20.545 16:22:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:20.545 16:22:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:21.477 16:23:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1412564 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.477 00:06:21.477 real 0m2.198s 00:06:21.477 user 0m0.022s 00:06:21.477 sys 0m0.045s 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 ************************************ 00:06:21.477 END TEST filesystem_in_capsule_ext4 00:06:21.477 ************************************ 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.477 
16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:21.477 ************************************
00:06:21.477 START TEST filesystem_in_capsule_btrfs
00:06:21.477 ************************************
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:06:21.477 16:23:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:06:22.041 btrfs-progs v6.6.2
00:06:22.041 See https://btrfs.readthedocs.io for more information.
00:06:22.041
00:06:22.041 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:06:22.041 NOTE: several default settings have changed in version 5.15, please make sure
00:06:22.041 this does not affect your deployments:
00:06:22.041 - DUP for metadata (-m dup)
00:06:22.041 - enabled no-holes (-O no-holes)
00:06:22.041 - enabled free-space-tree (-R free-space-tree)
00:06:22.041
00:06:22.041 Label: (null)
00:06:22.041 UUID: 0877d71a-eb1f-4eb0-9a6a-e2c18b7fbe0a
00:06:22.041 Node size: 16384
00:06:22.041 Sector size: 4096
00:06:22.041 Filesystem size: 510.00MiB
00:06:22.041 Block group profiles:
00:06:22.041 Data: single 8.00MiB
00:06:22.041 Metadata: DUP 32.00MiB
00:06:22.041 System: DUP 8.00MiB
00:06:22.041 SSD detected: yes
00:06:22.041 Zoned device: no
00:06:22.041 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:06:22.041 Runtime features: free-space-tree
00:06:22.041 Checksum: crc32c
00:06:22.041 Number of devices: 1
00:06:22.041 Devices:
00:06:22.041 ID SIZE PATH
00:06:22.041 1 510.00MiB /dev/nvme0n1p1
00:06:22.041
00:06:22.041 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0
00:06:22.041 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:22.299 16:23:01
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1412564 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:22.299 00:06:22.299 real 0m0.820s 00:06:22.299 user 0m0.031s 00:06:22.299 sys 0m0.112s 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:22.299 ************************************ 00:06:22.299 END TEST filesystem_in_capsule_btrfs 00:06:22.299 ************************************ 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.299 ************************************ 00:06:22.299 START TEST filesystem_in_capsule_xfs 00:06:22.299 ************************************ 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:22.299 16:23:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:22.299 16:23:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:22.557 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:22.557 = sectsz=512 attr=2, projid32bit=1 00:06:22.557 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:22.557 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:22.557 data = bsize=4096 blocks=130560, imaxpct=25 00:06:22.557 = sunit=0 swidth=0 blks 00:06:22.557 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:22.557 log =internal log bsize=4096 blocks=16384, version=2 00:06:22.557 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:22.557 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:23.487 Discarding blocks...Done. 00:06:23.487 16:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:23.487 16:23:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:26.014 16:23:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1412564 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:26.014 00:06:26.014 real 0m3.574s 00:06:26.014 user 0m0.026s 00:06:26.014 sys 0m0.048s 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:26.014 ************************************ 00:06:26.014 END TEST filesystem_in_capsule_xfs 00:06:26.014 ************************************ 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:26.014 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:26.272 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1412564 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1412564 ']' 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 1412564 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1412564 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1412564' 00:06:26.272 killing process with pid 1412564 00:06:26.272 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1412564 00:06:26.273 16:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1412564 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:26.858 00:06:26.858 real 0m13.433s 00:06:26.858 user 0m51.710s 00:06:26.858 sys 0m1.825s 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.858 ************************************ 00:06:26.858 END TEST nvmf_filesystem_in_capsule 00:06:26.858 ************************************ 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:26.858 16:23:06 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:26.858 rmmod nvme_tcp 00:06:26.858 rmmod nvme_fabrics 00:06:26.858 rmmod nvme_keyring 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.858 16:23:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.396 16:23:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:29.396 00:06:29.396 real 0m30.963s 00:06:29.396 user 1m42.195s 00:06:29.396 sys 0m5.395s 00:06:29.396 16:23:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.396 16:23:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.396 ************************************ 00:06:29.396 END TEST nvmf_filesystem 00:06:29.396 ************************************ 00:06:29.396 16:23:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:29.396 16:23:08 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:29.396 16:23:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:29.396 16:23:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.396 16:23:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.396 ************************************ 00:06:29.396 START TEST nvmf_target_discovery 00:06:29.396 ************************************ 00:06:29.396 16:23:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:29.396 * Looking for test storage... 
00:06:29.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.396 16:23:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.396 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:29.396 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.397 16:23:08 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:29.397 16:23:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:31.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.300 
16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:31.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:31.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:31.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.300 16:23:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:31.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:06:31.300 00:06:31.300 --- 10.0.0.2 ping statistics --- 00:06:31.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.300 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:31.300 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:06:31.301 00:06:31.301 --- 10.0.0.1 ping statistics --- 00:06:31.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.301 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:06:31.301 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.301 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:31.301 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1416302 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1416302 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1416302 ']' 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.560 16:23:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.560 [2024-07-15 16:23:10.978290] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:31.560 [2024-07-15 16:23:10.978386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.560 [2024-07-15 16:23:11.060323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.818 [2024-07-15 16:23:11.186519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.818 [2024-07-15 16:23:11.186577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.818 [2024-07-15 16:23:11.186593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.818 [2024-07-15 16:23:11.186616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.818 [2024-07-15 16:23:11.186629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:31.818 [2024-07-15 16:23:11.186712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.818 [2024-07-15 16:23:11.186771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.818 [2024-07-15 16:23:11.186819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.818 [2024-07-15 16:23:11.186822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 [2024-07-15 16:23:11.347592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:31.818 16:23:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 Null1 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 [2024-07-15 16:23:11.387918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:31.818 16:23:11 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 Null2 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.818 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 Null3 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 Null4 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:32.077 00:06:32.077 Discovery Log Number of Records 6, Generation counter 6 00:06:32.077 =====Discovery Log Entry 0====== 00:06:32.077 trtype: tcp 00:06:32.077 adrfam: ipv4 00:06:32.077 subtype: current discovery subsystem 00:06:32.077 treq: not required 00:06:32.077 portid: 0 00:06:32.077 trsvcid: 4420 00:06:32.077 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:32.077 traddr: 10.0.0.2 00:06:32.077 eflags: explicit discovery connections, duplicate discovery information 00:06:32.077 sectype: none 00:06:32.077 =====Discovery Log Entry 1====== 00:06:32.077 trtype: tcp 00:06:32.077 adrfam: ipv4 00:06:32.077 subtype: nvme subsystem 00:06:32.077 treq: not required 00:06:32.077 portid: 0 00:06:32.077 trsvcid: 4420 00:06:32.077 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:32.077 traddr: 10.0.0.2 00:06:32.077 eflags: none 00:06:32.077 sectype: none 00:06:32.077 =====Discovery Log Entry 2====== 00:06:32.077 trtype: tcp 00:06:32.077 adrfam: ipv4 00:06:32.077 subtype: nvme subsystem 00:06:32.077 treq: not required 00:06:32.077 portid: 
0 00:06:32.077 trsvcid: 4420 00:06:32.077 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:32.077 traddr: 10.0.0.2 00:06:32.077 eflags: none 00:06:32.077 sectype: none 00:06:32.077 =====Discovery Log Entry 3====== 00:06:32.077 trtype: tcp 00:06:32.077 adrfam: ipv4 00:06:32.077 subtype: nvme subsystem 00:06:32.077 treq: not required 00:06:32.077 portid: 0 00:06:32.077 trsvcid: 4420 00:06:32.077 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:32.077 traddr: 10.0.0.2 00:06:32.077 eflags: none 00:06:32.077 sectype: none 00:06:32.077 =====Discovery Log Entry 4====== 00:06:32.077 trtype: tcp 00:06:32.077 adrfam: ipv4 00:06:32.077 subtype: nvme subsystem 00:06:32.077 treq: not required 00:06:32.077 portid: 0 00:06:32.077 trsvcid: 4420 00:06:32.077 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:32.077 traddr: 10.0.0.2 00:06:32.077 eflags: none 00:06:32.077 sectype: none 00:06:32.077 =====Discovery Log Entry 5====== 00:06:32.077 trtype: tcp 00:06:32.077 adrfam: ipv4 00:06:32.077 subtype: discovery subsystem referral 00:06:32.077 treq: not required 00:06:32.077 portid: 0 00:06:32.077 trsvcid: 4430 00:06:32.077 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:32.077 traddr: 10.0.0.2 00:06:32.077 eflags: none 00:06:32.077 sectype: none 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:32.077 Perform nvmf subsystem discovery via RPC 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.077 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.077 [ 00:06:32.077 { 00:06:32.077 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:32.077 "subtype": "Discovery", 00:06:32.077 "listen_addresses": [ 00:06:32.077 { 00:06:32.077 "trtype": "TCP", 00:06:32.077 "adrfam": "IPv4", 00:06:32.077 "traddr": "10.0.0.2", 
00:06:32.077 "trsvcid": "4420" 00:06:32.077 } 00:06:32.077 ], 00:06:32.077 "allow_any_host": true, 00:06:32.077 "hosts": [] 00:06:32.077 }, 00:06:32.077 { 00:06:32.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:32.077 "subtype": "NVMe", 00:06:32.077 "listen_addresses": [ 00:06:32.077 { 00:06:32.077 "trtype": "TCP", 00:06:32.077 "adrfam": "IPv4", 00:06:32.077 "traddr": "10.0.0.2", 00:06:32.077 "trsvcid": "4420" 00:06:32.077 } 00:06:32.077 ], 00:06:32.077 "allow_any_host": true, 00:06:32.077 "hosts": [], 00:06:32.077 "serial_number": "SPDK00000000000001", 00:06:32.077 "model_number": "SPDK bdev Controller", 00:06:32.077 "max_namespaces": 32, 00:06:32.077 "min_cntlid": 1, 00:06:32.077 "max_cntlid": 65519, 00:06:32.077 "namespaces": [ 00:06:32.077 { 00:06:32.077 "nsid": 1, 00:06:32.077 "bdev_name": "Null1", 00:06:32.077 "name": "Null1", 00:06:32.077 "nguid": "820C90A7A8DE43EE8E664B2BF2227858", 00:06:32.077 "uuid": "820c90a7-a8de-43ee-8e66-4b2bf2227858" 00:06:32.077 } 00:06:32.077 ] 00:06:32.077 }, 00:06:32.077 { 00:06:32.077 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:32.077 "subtype": "NVMe", 00:06:32.077 "listen_addresses": [ 00:06:32.077 { 00:06:32.077 "trtype": "TCP", 00:06:32.077 "adrfam": "IPv4", 00:06:32.077 "traddr": "10.0.0.2", 00:06:32.077 "trsvcid": "4420" 00:06:32.077 } 00:06:32.077 ], 00:06:32.077 "allow_any_host": true, 00:06:32.077 "hosts": [], 00:06:32.077 "serial_number": "SPDK00000000000002", 00:06:32.077 "model_number": "SPDK bdev Controller", 00:06:32.077 "max_namespaces": 32, 00:06:32.077 "min_cntlid": 1, 00:06:32.077 "max_cntlid": 65519, 00:06:32.077 "namespaces": [ 00:06:32.077 { 00:06:32.077 "nsid": 1, 00:06:32.077 "bdev_name": "Null2", 00:06:32.077 "name": "Null2", 00:06:32.077 "nguid": "74D8F4AF44204F679566500B3B3A0508", 00:06:32.077 "uuid": "74d8f4af-4420-4f67-9566-500b3b3a0508" 00:06:32.077 } 00:06:32.077 ] 00:06:32.077 }, 00:06:32.077 { 00:06:32.077 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:32.077 "subtype": "NVMe", 00:06:32.077 
"listen_addresses": [ 00:06:32.077 { 00:06:32.077 "trtype": "TCP", 00:06:32.077 "adrfam": "IPv4", 00:06:32.077 "traddr": "10.0.0.2", 00:06:32.077 "trsvcid": "4420" 00:06:32.077 } 00:06:32.077 ], 00:06:32.077 "allow_any_host": true, 00:06:32.077 "hosts": [], 00:06:32.077 "serial_number": "SPDK00000000000003", 00:06:32.077 "model_number": "SPDK bdev Controller", 00:06:32.077 "max_namespaces": 32, 00:06:32.077 "min_cntlid": 1, 00:06:32.077 "max_cntlid": 65519, 00:06:32.077 "namespaces": [ 00:06:32.077 { 00:06:32.077 "nsid": 1, 00:06:32.077 "bdev_name": "Null3", 00:06:32.077 "name": "Null3", 00:06:32.077 "nguid": "421E54619636440BA6B51DF9E3FC4D27", 00:06:32.078 "uuid": "421e5461-9636-440b-a6b5-1df9e3fc4d27" 00:06:32.078 } 00:06:32.078 ] 00:06:32.078 }, 00:06:32.078 { 00:06:32.078 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:32.078 "subtype": "NVMe", 00:06:32.078 "listen_addresses": [ 00:06:32.078 { 00:06:32.078 "trtype": "TCP", 00:06:32.078 "adrfam": "IPv4", 00:06:32.078 "traddr": "10.0.0.2", 00:06:32.078 "trsvcid": "4420" 00:06:32.078 } 00:06:32.078 ], 00:06:32.078 "allow_any_host": true, 00:06:32.078 "hosts": [], 00:06:32.078 "serial_number": "SPDK00000000000004", 00:06:32.078 "model_number": "SPDK bdev Controller", 00:06:32.078 "max_namespaces": 32, 00:06:32.078 "min_cntlid": 1, 00:06:32.078 "max_cntlid": 65519, 00:06:32.078 "namespaces": [ 00:06:32.078 { 00:06:32.078 "nsid": 1, 00:06:32.078 "bdev_name": "Null4", 00:06:32.078 "name": "Null4", 00:06:32.078 "nguid": "D3A80C3621104AABA2721A91182F7D74", 00:06:32.078 "uuid": "d3a80c36-2110-4aab-a272-1a91182f7d74" 00:06:32.078 } 00:06:32.078 ] 00:06:32.078 } 00:06:32.078 ] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:32.078 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:32.337 rmmod nvme_tcp 00:06:32.337 rmmod nvme_fabrics 00:06:32.337 rmmod nvme_keyring 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:32.337 
16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1416302 ']' 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1416302 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1416302 ']' 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1416302 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416302 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416302' 00:06:32.337 killing process with pid 1416302 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1416302 00:06:32.337 16:23:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1416302 00:06:32.595 16:23:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:32.595 16:23:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:32.595 16:23:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:32.595 16:23:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:32.595 16:23:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:32.595 16:23:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.595 16:23:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.595 16:23:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.127 16:23:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:35.127 00:06:35.127 real 0m5.616s 00:06:35.127 user 0m4.311s 00:06:35.127 sys 0m1.963s 00:06:35.127 16:23:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.127 16:23:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.127 ************************************ 00:06:35.127 END TEST nvmf_target_discovery 00:06:35.127 ************************************ 00:06:35.127 16:23:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:35.127 16:23:14 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:35.127 16:23:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:35.127 16:23:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.127 16:23:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.127 ************************************ 00:06:35.127 START TEST nvmf_referrals 00:06:35.127 ************************************ 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:35.127 * Looking for test storage... 
00:06:35.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.127 16:23:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.128 
16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:35.128 16:23:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
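The `gather_supported_nvmf_pci_devs` pass that follows walks known Intel/Mellanox PCI IDs and globs each device's sysfs node for kernel net interfaces. A minimal stand-in for that glob, run against a fake sysfs tree so it works anywhere (the temp-dir layout below is invented for illustration; the PCI addresses and `cvl_0_*` names are the ones this log reports):

```shell
#!/usr/bin/env bash
# Miniature of the pci_net_devs glob in nvmf/common.sh: for each candidate
# PCI address, list the net devices exposed under its sysfs node.
set -euo pipefail

sysfs=$(mktemp -d)   # fake sysfs root; a real run would use /sys/bus/pci/devices
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)       # glob, as common.sh@383 does
    pci_net_devs=("${pci_net_devs[@]##*/}")  # strip path, keep interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "total net_devs: ${#net_devs[@]}"
```

With two interfaces found, the harness takes the `(( 2 > 1 ))` branch seen below and splits them into target and initiator roles.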
00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:37.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:37.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.053 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:37.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.054 16:23:16 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:37.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:37.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:37.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:06:37.054 00:06:37.054 --- 10.0.0.2 ping statistics --- 00:06:37.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.054 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:06:37.054 00:06:37.054 --- 10.0.0.1 ping statistics --- 00:06:37.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.054 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.054 16:23:16 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1418389 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1418389 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1418389 ']' 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.054 16:23:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.054 [2024-07-15 16:23:16.465480] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:37.054 [2024-07-15 16:23:16.465563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.054 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.054 [2024-07-15 16:23:16.533257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.312 [2024-07-15 16:23:16.654001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.312 [2024-07-15 16:23:16.654057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
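The `nvmf_tcp_init` sequence a few lines up moves the target NIC into a private network namespace so initiator and target talk over a real TCP path on one host. A dry-run sketch of that sequence (interface and namespace names taken from this log; `run` only echoes, since the real commands need root):

```shell
#!/usr/bin/env bash
# Dry-run of the netns split nvmf/common.sh performs: target side (cvl_0_0,
# 10.0.0.2) lives in its own namespace, initiator side (cvl_0_1, 10.0.0.1)
# stays in the root namespace.
run() { echo "+ $*"; }   # on a live system, replace with: run() { sudo "$@"; }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings above (10.0.0.2 and 10.0.0.1, both answered) confirm the topology before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.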
00:06:37.312 [2024-07-15 16:23:16.654073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.312 [2024-07-15 16:23:16.654086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.312 [2024-07-15 16:23:16.654098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.312 [2024-07-15 16:23:16.654186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.312 [2024-07-15 16:23:16.654221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.312 [2024-07-15 16:23:16.654271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.312 [2024-07-15 16:23:16.654275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.890 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.890 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:37.890 16:23:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:37.890 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.890 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 [2024-07-15 16:23:17.496259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.147 16:23:17 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 [2024-07-15 16:23:17.508417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:38.147 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:38.404 16:23:17 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:06:38.404 16:23:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:38.661 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:38.661 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:38.661 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:38.661 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:38.661 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:38.661 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:38.661 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:38.917 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
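Each check in this test boils down to `get_referral_ips`: collect referral `traddr`s from two sources (the `nvmf_discovery_get_referrals` RPC and the jq-filtered `nvme discover -o json` records), sort both lists, and require them to match. A self-contained sketch of that comparison, using literal sample addresses in place of live RPC/discovery output (the values mirror the 127.0.0.2-4 referrals this log adds):

```shell
#!/usr/bin/env bash
# Sketch of the get_referral_ips comparison in referrals.sh: sort both
# address lists, then assert equality the same way the harness does.
set -euo pipefail

# Stand-ins for live output; real runs would populate these from
# `rpc_cmd nvmf_discovery_get_referrals` and `nvme discover ... -o json | jq`.
rpc_ips="127.0.0.3 127.0.0.2 127.0.0.4"
nvme_ips="127.0.0.2 127.0.0.4 127.0.0.3"

sort_ips() { tr ' ' '\n' <<<"$1" | sort | xargs; }  # one line, sorted

a=$(sort_ips "$rpc_ips")
b=$(sort_ips "$nvme_ips")
[[ "$a" == "$b" ]] && echo "referrals match: $a"
```

The jq filter `select(.subtype != "current discovery subsystem")` drops the target's own discovery entry so only genuine referrals are compared; the empty-string compare (`'' == ''`) seen earlier is the same check passing after all referrals were removed.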
00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:38.918 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.175 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.432 16:23:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.432 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:39.432 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:39.432 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:39.432 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:39.432 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:39.432 16:23:19 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:39.432 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:39.689 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:39.689 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:39.689 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:39.689 16:23:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:39.690 rmmod nvme_tcp 00:06:39.690 rmmod nvme_fabrics 00:06:39.690 rmmod nvme_keyring 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1418389 ']' 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1418389 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1418389 ']' 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1418389 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1418389 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1418389' 00:06:39.690 killing process with pid 1418389 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1418389 00:06:39.690 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1418389 00:06:39.948 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:39.948 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:39.948 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:39.948 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:39.948 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:39.948 16:23:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.948 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:39.949 16:23:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.480 16:23:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:42.480 00:06:42.480 real 0m7.313s 00:06:42.480 user 0m12.614s 00:06:42.480 sys 0m2.139s 00:06:42.480 16:23:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.480 16:23:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.480 ************************************ 
00:06:42.480 END TEST nvmf_referrals 00:06:42.480 ************************************ 00:06:42.480 16:23:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:42.480 16:23:21 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:42.480 16:23:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.480 16:23:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.480 16:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.480 ************************************ 00:06:42.480 START TEST nvmf_connect_disconnect 00:06:42.480 ************************************ 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:42.480 * Looking for test storage... 00:06:42.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.480 16:23:21 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.480 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.481 16:23:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
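The device-gathering trace that follows buckets NICs into e810/x722/mlx families by PCI vendor:device ID before deciding which interfaces to use. A hedged, illustrative re-statement of that classification step (the device IDs are copied from the trace output; the function name and the `unknown` fallback are my own):

```shell
# Map an Intel PCI device ID to its NIC family, as the trace below does
# when it appends to the e810/x722 arrays. Sketch only; IDs from the log.
classify_nic() {
  case "$1" in
    0x1592|0x159b) echo "e810" ;;
    0x37d2)        echo "x722" ;;
    *)             echo "unknown" ;;
  esac
}

classify_nic 0x159b   # the "Found 0000:0a:00.0 (0x8086 - 0x159b)" device
```

In the real script the matching IDs come from a `pci_bus_cache` lookup and the buckets are bash arrays; the `case` form above just shows the ID-to-family mapping in isolation.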
00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:44.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:44.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.379 16:23:23 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.379 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:44.380 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:44.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:44.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:44.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:06:44.380 00:06:44.380 --- 10.0.0.2 ping statistics --- 00:06:44.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.380 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:06:44.380 00:06:44.380 --- 10.0.0.1 ping statistics --- 00:06:44.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.380 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1420718 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1420718 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1420718 ']' 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.380 16:23:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:44.380 [2024-07-15 16:23:23.865201] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
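`waitforlisten` above blocks until the freshly launched `nvmf_tgt` is listening on `/var/tmp/spdk.sock`. A simplified, hypothetical version of that polling loop (the real helper also verifies the PID is still alive and that the RPC server actually answers; the retry count and sleep interval here are arbitrary):

```shell
# Poll for a UNIX-domain socket path with a bounded retry count.
# Sketch only: the real waitforlisten also probes the RPC endpoint.
wait_for_sock() {
  sock=$1
  retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0
    retries=$((retries - 1))
    sleep 1
  done
  return 1
}

wait_for_sock /var/tmp/spdk.sock 1 || echo "target not listening yet"
```

Bounding the retries matters in CI: if the target crashes on startup, the test fails fast with a timeout instead of hanging the whole pipeline.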
00:06:44.380 [2024-07-15 16:23:23.865290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.380 [2024-07-15 16:23:23.935075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.639 [2024-07-15 16:23:24.055358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.639 [2024-07-15 16:23:24.055423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.639 [2024-07-15 16:23:24.055439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.639 [2024-07-15 16:23:24.055452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.639 [2024-07-15 16:23:24.055464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:44.639 [2024-07-15 16:23:24.055567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.639 [2024-07-15 16:23:24.055627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.639 [2024-07-15 16:23:24.055683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.639 [2024-07-15 16:23:24.055686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.612 [2024-07-15 16:23:24.871185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:45.612 [2024-07-15 16:23:24.922231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:45.612 16:23:24 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:45.612 16:23:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:48.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:51.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:53.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:59.754 rmmod nvme_tcp 00:06:59.754 rmmod nvme_fabrics 00:06:59.754 rmmod nvme_keyring 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1420718 ']' 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1420718 00:06:59.754 16:23:38 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1420718 ']' 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1420718 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1420718 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1420718' 00:06:59.754 killing process with pid 1420718 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1420718 00:06:59.754 16:23:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1420718 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.754 16:23:39 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.663 16:23:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:01.663 00:07:01.663 real 0m19.595s 00:07:01.663 user 0m59.675s 00:07:01.663 sys 0m3.351s 00:07:01.663 16:23:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.663 16:23:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.663 ************************************ 00:07:01.663 END TEST nvmf_connect_disconnect 00:07:01.663 ************************************ 00:07:01.663 16:23:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:01.663 16:23:41 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:01.663 16:23:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.663 16:23:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.663 16:23:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.663 ************************************ 00:07:01.663 START TEST nvmf_multitarget 00:07:01.663 ************************************ 00:07:01.663 16:23:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:01.663 * Looking for test storage... 
00:07:01.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.663 16:23:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.663 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:01.663 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.664 16:23:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.924 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.924 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.924 16:23:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.924 16:23:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.833 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:03.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:03.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:03.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.834 16:23:43 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:03.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.834 16:23:43 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:03.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:07:03.834 00:07:03.834 --- 10.0.0.2 ping statistics --- 00:07:03.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.834 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:07:03.834 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:07:04.093 00:07:04.093 --- 10.0.0.1 ping statistics --- 00:07:04.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.093 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1424592 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1424592 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 1424592 ']' 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.093 16:23:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.093 [2024-07-15 16:23:43.512709] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:04.093 [2024-07-15 16:23:43.512794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.093 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.093 [2024-07-15 16:23:43.581760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.351 [2024-07-15 16:23:43.703130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.351 [2024-07-15 16:23:43.703196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.351 [2024-07-15 16:23:43.703213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.351 [2024-07-15 16:23:43.703226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.351 [2024-07-15 16:23:43.703237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:04.351 [2024-07-15 16:23:43.703303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.351 [2024-07-15 16:23:43.703360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.351 [2024-07-15 16:23:43.703413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.351 [2024-07-15 16:23:43.703416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:04.916 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:05.174 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:05.174 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:05.174 "nvmf_tgt_1" 00:07:05.174 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:05.431 "nvmf_tgt_2" 00:07:05.431 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:05.431 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:05.431 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:05.431 16:23:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:05.689 true 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:05.689 true 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:05.689 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:05.689 16:23:45 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:05.689 rmmod nvme_tcp 00:07:05.947 rmmod nvme_fabrics 00:07:05.947 rmmod nvme_keyring 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1424592 ']' 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1424592 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1424592 ']' 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1424592 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1424592 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1424592' 00:07:05.947 killing process with pid 1424592 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1424592 00:07:05.947 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1424592 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.207 16:23:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.139 16:23:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:08.139 00:07:08.139 real 0m6.475s 00:07:08.139 user 0m9.262s 00:07:08.139 sys 0m1.920s 00:07:08.139 16:23:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.139 16:23:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:08.139 ************************************ 00:07:08.139 END TEST nvmf_multitarget 00:07:08.139 ************************************ 00:07:08.139 16:23:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:08.139 16:23:47 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:08.139 16:23:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.139 16:23:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.139 16:23:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.139 ************************************ 00:07:08.139 START TEST nvmf_rpc 00:07:08.139 ************************************ 00:07:08.139 16:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:08.397 * Looking for test storage... 
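The multitarget test that just finished follows a create/verify/delete pattern: start with the default target (list length 1), create `nvmf_tgt_1` and `nvmf_tgt_2` via `multitarget_rpc.py`, confirm the list length is 3 with `jq length`, delete both, and confirm the length is back to 1. A pure-bash sketch of that flow, where a bash array stands in for the RPC-managed target list (all names here are illustrative):

```shell
#!/usr/bin/env bash
# State machine mirroring the multitarget test: the array plays the role of
# the target list that nvmf_get_targets would return.
targets=()

create_target() { targets+=("$1"); }        # analogous to nvmf_create_target -n <name>

delete_target() {                           # analogous to nvmf_delete_target -n <name>
    local keep=() t
    for t in "${targets[@]}"; do
        [ "$t" = "$1" ] || keep+=("$t")
    done
    targets=("${keep[@]}")
}

target_count() { echo "${#targets[@]}"; }   # stands in for piping to 'jq length'
```

The test fails fast on any mismatch (`'[' 3 '!=' 3 ']'` in the log is the success case of exactly this comparison).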
00:07:08.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.397 16:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.305 16:23:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:10.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:10.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:10.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:10.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
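The loop above resolves each NIC's PCI address to its kernel net interface by globbing `/sys/bus/pci/devices/$pci/net/*`, which is how `cvl_0_0` and `cvl_0_1` are found under `0000:0a:00.0` and `0000:0a:00.1`. A small sketch of that lookup; the sysfs root is a parameter here purely so the helper can be exercised without real hardware (normally it is `/sys/bus/pci/devices`), and the function name is illustrative:

```shell
#!/usr/bin/env bash
# List net interfaces bound to a PCI device by reading the sysfs 'net'
# subdirectory, mirroring the pci_net_devs glob in the log.
list_net_devs() {
    local root=$1 pci=$2 d
    for d in "$root/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"   # strip the path, keep the interface name
    done
    return 0                             # empty result is not an error
}
```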
00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.305 16:23:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:10.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:10.305 00:07:10.305 --- 10.0.0.2 ping statistics --- 00:07:10.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.305 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:07:10.305 00:07:10.305 --- 10.0.0.1 ping statistics --- 00:07:10.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.305 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:10.305 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:10.306 
16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1426698 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1426698 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1426698 ']' 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.306 16:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.564 [2024-07-15 16:23:49.911252] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
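Before starting the target, the harness verified the namespace plumbing by pinging 10.0.0.2 from the host and 10.0.0.1 from inside `cvl_0_0_ns_spdk`, proceeding only on zero packet loss. A tiny sketch of that success check on captured `ping -c 1` output (the function name `ping_ok` is illustrative):

```shell
#!/usr/bin/env bash
# Succeed only when the ping summary line reports exactly " 0% packet loss",
# i.e. every probe got a reply.
ping_ok() {
    printf '%s\n' "$1" | grep -q ' 0% packet loss'
}
```

The leading space in the pattern matters: without it, a `100% packet loss` summary would also match.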
00:07:10.564 [2024-07-15 16:23:49.911339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.564 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.564 [2024-07-15 16:23:49.983258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.564 [2024-07-15 16:23:50.111635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.564 [2024-07-15 16:23:50.111694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.564 [2024-07-15 16:23:50.111711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.564 [2024-07-15 16:23:50.111725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.564 [2024-07-15 16:23:50.111736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:10.564 [2024-07-15 16:23:50.111832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.564 [2024-07-15 16:23:50.111924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.564 [2024-07-15 16:23:50.111951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.564 [2024-07-15 16:23:50.111955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.496 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.496 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:11.496 16:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:11.496 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:11.497 "tick_rate": 2700000000, 00:07:11.497 "poll_groups": [ 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_000", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [] 00:07:11.497 }, 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_001", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 
0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [] 00:07:11.497 }, 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_002", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [] 00:07:11.497 }, 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_003", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [] 00:07:11.497 } 00:07:11.497 ] 00:07:11.497 }' 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.497 [2024-07-15 16:23:50.987266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.497 16:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:11.497 "tick_rate": 2700000000, 00:07:11.497 "poll_groups": [ 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_000", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [ 00:07:11.497 { 00:07:11.497 "trtype": "TCP" 00:07:11.497 } 00:07:11.497 ] 00:07:11.497 }, 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_001", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [ 00:07:11.497 { 00:07:11.497 "trtype": "TCP" 00:07:11.497 } 00:07:11.497 ] 00:07:11.497 }, 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_002", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [ 00:07:11.497 { 00:07:11.497 "trtype": "TCP" 00:07:11.497 } 00:07:11.497 ] 00:07:11.497 }, 00:07:11.497 { 00:07:11.497 "name": "nvmf_tgt_poll_group_003", 00:07:11.497 "admin_qpairs": 0, 00:07:11.497 "io_qpairs": 0, 00:07:11.497 "current_admin_qpairs": 0, 00:07:11.497 "current_io_qpairs": 0, 00:07:11.497 "pending_bdev_io": 0, 00:07:11.497 "completed_nvme_io": 0, 00:07:11.497 "transports": [ 00:07:11.497 { 00:07:11.497 "trtype": "TCP" 00:07:11.497 } 00:07:11.497 ] 00:07:11.497 } 
00:07:11.497 ] 00:07:11.497 }' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.497 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.755 Malloc1 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.755 [2024-07-15 16:23:51.143747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:11.755 [2024-07-15 16:23:51.166212] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:11.755 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:11.755 could not add new controller: failed to write to nvme-fabrics device 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.755 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.319 16:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.319 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:12.319 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.319 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:12.319 16:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.846 16:23:53 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:14.846 16:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.846 [2024-07-15 16:23:53.996031] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:14.846 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:14.846 could not add new controller: failed to write to nvme-fabrics device 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.846 16:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.104 16:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.104 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:15.104 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.104 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:15.104 16:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.648 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.649 [2024-07-15 16:23:56.784553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
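The log above begins the first of five passes through the same subsystem lifecycle (target/rpc.sh @81-@94). A dry-run sketch of one pass, with `rpc_cmd` stubbed by `echo` so the RPC sequence can be inspected without a live SPDK target (in the real test, rpc_cmd drives these calls over the target's RPC socket):

```shell
#!/usr/bin/env bash
# Dry-run sketch of one iteration of the create/connect/teardown loop
# repeated five times in this log. rpc_cmd is stubbed here; the real
# helper forwards to scripts/rpc.py against the running nvmf target.
rpc_cmd() { echo rpc.py "$@"; }   # stub: just print the RPC that would run
NQN=nqn.2016-06.io.spdk:cnode1

iteration() {
  rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
  # ... host runs `nvme connect`, waits for the serial, then disconnects ...
  rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
  rpc_cmd nvmf_delete_subsystem "$NQN"
}

steps=$(iteration | wc -l)
```

Each iteration tears the subsystem fully down before the next one recreates it, which is what makes the `nvme connect` / `waitforserial` / `nvme disconnect` blocks below repeat verbatim with only timestamps changing.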
00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.649 16:23:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.906 16:23:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.906 16:23:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:17.906 16:23:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.906 16:23:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:17.906 16:23:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 [2024-07-15 16:23:59.605036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.429 16:23:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.995 16:24:00 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.995 16:24:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.995 16:24:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.995 16:24:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.995 16:24:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:22.892 16:24:02 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 [2024-07-15 16:24:02.430261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:22.892 16:24:02 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.892 16:24:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.823 16:24:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.823 16:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:23.823 16:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.823 16:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:23.823 16:24:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.721 [2024-07-15 16:24:05.255375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.721 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.739 16:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.739 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
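The `waitforserial` helper being entered above (common/autotest_common.sh @1198-@1208) polls `lsblk -l -o NAME,SERIAL` until the expected serial appears, giving up after ~15 tries. A self-contained sketch — `list_devices` fakes the lsblk output so it runs anywhere, whereas the real helper reads live block devices:

```shell
#!/usr/bin/env bash
# Sketch of waitforserial: poll the block-device list until a device with
# the given serial shows up. list_devices is a stand-in for
# `lsblk -l -o NAME,SERIAL`; its sample output is hypothetical.
list_devices() {
  printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'
}

waitforserial() {
  local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
  while (( i++ <= 15 )); do
    # Count matching devices, mirroring the `grep -c` seen in the log.
    nvme_devices=$(list_devices | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
    sleep 2
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected
```

`waitforserial_disconnect` is the mirror image: it loops until `grep -q -w` on the same lsblk output stops matching, confirming the namespace is gone after `nvme disconnect`.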
00:07:26.739 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.739 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:26.739 16:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:28.673 16:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:28.673 16:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:28.673 16:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:28.673 16:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:28.673 16:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:28.673 16:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:28.673 16:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:28.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.673 [2024-07-15 16:24:08.070646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.673 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.238 16:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.238 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:29.238 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.238 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:29.238 16:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:31.134 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:31.134 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:31.134 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.134 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:31.134 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.134 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:31.134 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.393 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.393 16:24:10 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 [2024-07-15 16:24:10.800335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.393 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 [2024-07-15 16:24:10.848376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 [2024-07-15 16:24:10.896538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 [2024-07-15 16:24:10.944700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.394 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.652 [2024-07-15 16:24:10.992850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.652 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.652 16:24:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.652 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.652 16:24:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.652 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:31.653 "tick_rate": 2700000000, 00:07:31.653 "poll_groups": [ 00:07:31.653 { 00:07:31.653 "name": "nvmf_tgt_poll_group_000", 00:07:31.653 "admin_qpairs": 2, 00:07:31.653 "io_qpairs": 84, 00:07:31.653 "current_admin_qpairs": 0, 00:07:31.653 "current_io_qpairs": 0, 00:07:31.653 "pending_bdev_io": 0, 00:07:31.653 "completed_nvme_io": 184, 00:07:31.653 "transports": [ 00:07:31.653 { 00:07:31.653 "trtype": "TCP" 00:07:31.653 } 00:07:31.653 ] 00:07:31.653 }, 00:07:31.653 { 00:07:31.653 "name": "nvmf_tgt_poll_group_001", 00:07:31.653 "admin_qpairs": 2, 00:07:31.653 "io_qpairs": 84, 
00:07:31.653 "current_admin_qpairs": 0, 00:07:31.653 "current_io_qpairs": 0, 00:07:31.653 "pending_bdev_io": 0, 00:07:31.653 "completed_nvme_io": 133, 00:07:31.653 "transports": [ 00:07:31.653 { 00:07:31.653 "trtype": "TCP" 00:07:31.653 } 00:07:31.653 ] 00:07:31.653 }, 00:07:31.653 { 00:07:31.653 "name": "nvmf_tgt_poll_group_002", 00:07:31.653 "admin_qpairs": 1, 00:07:31.653 "io_qpairs": 84, 00:07:31.653 "current_admin_qpairs": 0, 00:07:31.653 "current_io_qpairs": 0, 00:07:31.653 "pending_bdev_io": 0, 00:07:31.653 "completed_nvme_io": 135, 00:07:31.653 "transports": [ 00:07:31.653 { 00:07:31.653 "trtype": "TCP" 00:07:31.653 } 00:07:31.653 ] 00:07:31.653 }, 00:07:31.653 { 00:07:31.653 "name": "nvmf_tgt_poll_group_003", 00:07:31.653 "admin_qpairs": 2, 00:07:31.653 "io_qpairs": 84, 00:07:31.653 "current_admin_qpairs": 0, 00:07:31.653 "current_io_qpairs": 0, 00:07:31.653 "pending_bdev_io": 0, 00:07:31.653 "completed_nvme_io": 234, 00:07:31.653 "transports": [ 00:07:31.653 { 00:07:31.653 "trtype": "TCP" 00:07:31.653 } 00:07:31.653 ] 00:07:31.653 } 00:07:31.653 ] 00:07:31.653 }' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.653 rmmod nvme_tcp 00:07:31.653 rmmod nvme_fabrics 00:07:31.653 rmmod nvme_keyring 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1426698 ']' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1426698 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1426698 ']' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1426698 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1426698 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.653 
16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1426698' 00:07:31.653 killing process with pid 1426698 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1426698 00:07:31.653 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1426698 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.221 16:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.126 16:24:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.126 00:07:34.126 real 0m25.847s 00:07:34.126 user 1m24.791s 00:07:34.126 sys 0m4.066s 00:07:34.126 16:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.126 16:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.126 ************************************ 00:07:34.126 END TEST nvmf_rpc 00:07:34.126 ************************************ 00:07:34.126 16:24:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:34.126 16:24:13 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:34.126 16:24:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.126 16:24:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:34.126 16:24:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.126 ************************************ 00:07:34.126 START TEST nvmf_invalid 00:07:34.126 ************************************ 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:34.126 * Looking for test storage... 00:07:34.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.126 16:24:13 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:07:34.126 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.127 16:24:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:36.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.657 16:24:15 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:36.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:36.657 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.657 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:36.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.658 16:24:15 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:36.658 00:07:36.658 --- 10.0.0.2 ping statistics --- 00:07:36.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.658 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:36.658 00:07:36.658 --- 10.0.0.1 ping statistics --- 00:07:36.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.658 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=1431947 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1431947 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1431947 ']' 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.658 16:24:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:36.658 [2024-07-15 16:24:16.006048] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:36.658 [2024-07-15 16:24:16.006146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.658 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.658 [2024-07-15 16:24:16.071463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.658 [2024-07-15 16:24:16.187500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.658 [2024-07-15 16:24:16.187548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:36.658 [2024-07-15 16:24:16.187576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.658 [2024-07-15 16:24:16.187588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.658 [2024-07-15 16:24:16.187597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.658 [2024-07-15 16:24:16.187686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.658 [2024-07-15 16:24:16.187752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.658 [2024-07-15 16:24:16.187801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.658 [2024-07-15 16:24:16.187804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:36.916 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2209 00:07:37.174 [2024-07-15 16:24:16.622530] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:37.174 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # 
out='request: 00:07:37.174 { 00:07:37.174 "nqn": "nqn.2016-06.io.spdk:cnode2209", 00:07:37.174 "tgt_name": "foobar", 00:07:37.174 "method": "nvmf_create_subsystem", 00:07:37.174 "req_id": 1 00:07:37.174 } 00:07:37.174 Got JSON-RPC error response 00:07:37.174 response: 00:07:37.174 { 00:07:37.174 "code": -32603, 00:07:37.174 "message": "Unable to find target foobar" 00:07:37.174 }' 00:07:37.174 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:37.174 { 00:07:37.174 "nqn": "nqn.2016-06.io.spdk:cnode2209", 00:07:37.174 "tgt_name": "foobar", 00:07:37.174 "method": "nvmf_create_subsystem", 00:07:37.174 "req_id": 1 00:07:37.174 } 00:07:37.174 Got JSON-RPC error response 00:07:37.174 response: 00:07:37.174 { 00:07:37.174 "code": -32603, 00:07:37.174 "message": "Unable to find target foobar" 00:07:37.174 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:37.174 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:37.174 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3967 00:07:37.431 [2024-07-15 16:24:16.911525] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3967: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:37.431 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:37.431 { 00:07:37.431 "nqn": "nqn.2016-06.io.spdk:cnode3967", 00:07:37.431 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:37.431 "method": "nvmf_create_subsystem", 00:07:37.431 "req_id": 1 00:07:37.431 } 00:07:37.431 Got JSON-RPC error response 00:07:37.431 response: 00:07:37.431 { 00:07:37.431 "code": -32602, 00:07:37.431 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:37.431 }' 00:07:37.431 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:37.431 { 00:07:37.431 "nqn": 
"nqn.2016-06.io.spdk:cnode3967", 00:07:37.431 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:37.431 "method": "nvmf_create_subsystem", 00:07:37.431 "req_id": 1 00:07:37.431 } 00:07:37.431 Got JSON-RPC error response 00:07:37.431 response: 00:07:37.431 { 00:07:37.431 "code": -32602, 00:07:37.431 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:37.431 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:37.431 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:37.431 16:24:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18371 00:07:37.689 [2024-07-15 16:24:17.152286] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18371: invalid model number 'SPDK_Controller' 00:07:37.689 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:37.689 { 00:07:37.689 "nqn": "nqn.2016-06.io.spdk:cnode18371", 00:07:37.689 "model_number": "SPDK_Controller\u001f", 00:07:37.689 "method": "nvmf_create_subsystem", 00:07:37.689 "req_id": 1 00:07:37.689 } 00:07:37.689 Got JSON-RPC error response 00:07:37.689 response: 00:07:37.689 { 00:07:37.689 "code": -32602, 00:07:37.689 "message": "Invalid MN SPDK_Controller\u001f" 00:07:37.689 }' 00:07:37.689 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:37.689 { 00:07:37.689 "nqn": "nqn.2016-06.io.spdk:cnode18371", 00:07:37.689 "model_number": "SPDK_Controller\u001f", 00:07:37.689 "method": "nvmf_create_subsystem", 00:07:37.689 "req_id": 1 00:07:37.689 } 00:07:37.689 Got JSON-RPC error response 00:07:37.689 response: 00:07:37.689 { 00:07:37.689 "code": -32602, 00:07:37.689 "message": "Invalid MN SPDK_Controller\u001f" 00:07:37.689 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:37.689 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:37.689 16:24:17 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:07:37.689 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x26' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=N 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 53 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '3L&"&W6X0F$N}9_75sl$)' 00:07:37.690 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '3L&"&W6X0F$N}9_75sl$)' nqn.2016-06.io.spdk:cnode20978 00:07:37.948 [2024-07-15 16:24:17.505478] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20978: invalid serial number '3L&"&W6X0F$N}9_75sl$)' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:37.948 { 00:07:37.948 "nqn": "nqn.2016-06.io.spdk:cnode20978", 00:07:37.948 "serial_number": "3L&\"&W6X0F$N}9_75sl$)", 00:07:37.948 "method": "nvmf_create_subsystem", 00:07:37.948 "req_id": 1 00:07:37.948 } 00:07:37.948 Got JSON-RPC error response 00:07:37.948 response: 00:07:37.948 { 00:07:37.948 "code": -32602, 00:07:37.948 "message": "Invalid SN 3L&\"&W6X0F$N}9_75sl$)" 00:07:37.948 }' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:37.948 { 00:07:37.948 "nqn": "nqn.2016-06.io.spdk:cnode20978", 00:07:37.948 "serial_number": "3L&\"&W6X0F$N}9_75sl$)", 00:07:37.948 "method": "nvmf_create_subsystem", 00:07:37.948 "req_id": 1 00:07:37.948 } 00:07:37.948 Got JSON-RPC error response 00:07:37.948 response: 00:07:37.948 { 00:07:37.948 "code": -32602, 00:07:37.948 "message": "Invalid SN 3L&\"&W6X0F$N}9_75sl$)" 00:07:37.948 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' 
'37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:37.948 16:24:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:37.948 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:38.207 16:24:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:38.207 16:24:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:38.207 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 
00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 
16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 
00:07:38.208 16:24:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_(; /dev/null' 00:07:40.818 16:24:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.353 16:24:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.353 00:07:43.353 real 0m8.796s 00:07:43.353 user 0m20.208s 00:07:43.353 sys 0m2.485s 00:07:43.353 16:24:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.353 16:24:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:43.353 ************************************ 00:07:43.353 END TEST nvmf_invalid 00:07:43.353 ************************************ 00:07:43.353 16:24:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:43.353 16:24:22 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:43.353 16:24:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.353 16:24:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.353 16:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.353 ************************************ 00:07:43.353 START TEST nvmf_abort 00:07:43.353 ************************************ 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:43.353 * Looking for test storage... 
00:07:43.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.353 16:24:22 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.354 16:24:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.286 16:24:24 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:45.286 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:45.286 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:45.286 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:45.286 Found net devices under 
0000:0a:00.1: cvl_0_1 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:07:45.286 00:07:45.286 --- 10.0.0.2 ping statistics --- 00:07:45.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.286 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:07:45.286 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:07:45.286 00:07:45.286 --- 10.0.0.1 ping statistics --- 00:07:45.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.287 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1434574 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1434574 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1434574 ']' 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.287 16:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.287 [2024-07-15 16:24:24.686503] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:45.287 [2024-07-15 16:24:24.686583] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.287 [2024-07-15 16:24:24.756299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.287 [2024-07-15 16:24:24.876901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.287 [2024-07-15 16:24:24.876980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.287 [2024-07-15 16:24:24.876996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.287 [2024-07-15 16:24:24.877024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.287 [2024-07-15 16:24:24.877034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:45.287 [2024-07-15 16:24:24.877109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.287 [2024-07-15 16:24:24.877135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.287 [2024-07-15 16:24:24.877138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.222 [2024-07-15 16:24:25.696442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.222 Malloc0 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.222 Delay0 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.222 [2024-07-15 16:24:25.762204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.222 16:24:25 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:46.222 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.480 [2024-07-15 16:24:25.868096] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:49.007 Initializing NVMe Controllers 00:07:49.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:49.007 controller IO queue size 128 less than required 00:07:49.007 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:49.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:49.007 Initialization complete. Launching workers. 
00:07:49.007 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31391 00:07:49.007 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31452, failed to submit 62 00:07:49.007 success 31395, unsuccess 57, failed 0 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.007 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:49.007 rmmod nvme_tcp 00:07:49.008 rmmod nvme_fabrics 00:07:49.008 rmmod nvme_keyring 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1434574 ']' 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1434574 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1434574 ']' 00:07:49.008 16:24:28 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1434574 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1434574 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1434574' 00:07:49.008 killing process with pid 1434574 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1434574 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1434574 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.008 16:24:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.913 16:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.913 00:07:50.913 real 0m8.043s 00:07:50.913 user 0m13.083s 00:07:50.913 sys 0m2.604s 00:07:50.913 16:24:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:50.913 16:24:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:50.913 ************************************ 00:07:50.913 END TEST nvmf_abort 00:07:50.913 ************************************ 00:07:51.171 16:24:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.171 16:24:30 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:51.171 16:24:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.171 16:24:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.171 16:24:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.171 ************************************ 00:07:51.171 START TEST nvmf_ns_hotplug_stress 00:07:51.171 ************************************ 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:51.171 * Looking for test storage... 
00:07:51.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.171 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.172 16:24:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.172 16:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.093 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.093 
16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.093 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.093 
Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:53.093 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.093 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.352 16:24:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:07:53.352 00:07:53.352 --- 10.0.0.2 ping statistics --- 00:07:53.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.352 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:07:53.352 00:07:53.352 --- 10.0.0.1 ping statistics --- 00:07:53.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.352 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1436936 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1436936 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1436936 ']' 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.352 16:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:53.352 [2024-07-15 16:24:32.874594] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:07:53.352 [2024-07-15 16:24:32.874692] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.352 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.352 [2024-07-15 16:24:32.942762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.610 [2024-07-15 16:24:33.061508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.610 [2024-07-15 16:24:33.061573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.610 [2024-07-15 16:24:33.061590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.610 [2024-07-15 16:24:33.061603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.610 [2024-07-15 16:24:33.061615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:53.610 [2024-07-15 16:24:33.061698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.610 [2024-07-15 16:24:33.061758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.610 [2024-07-15 16:24:33.061761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:54.559 16:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.559 [2024-07-15 16:24:34.088982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.559 16:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:54.815 16:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.072 [2024-07-15 16:24:34.583624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:07:55.072 16:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.329 16:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:55.586 Malloc0 00:07:55.586 16:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:55.844 Delay0 00:07:55.844 16:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.101 16:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:56.358 NULL1 00:07:56.358 16:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:56.615 16:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1437359 00:07:56.615 16:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:56.615 16:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:07:56.615 16:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.615 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.985 Read completed with error (sct=0, sc=11) 00:07:57.985 16:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.242 16:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:58.242 16:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:58.499 true 00:07:58.499 16:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:07:58.499 16:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.062 16:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.333 16:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 
00:07:59.333 16:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:59.590 true 00:07:59.590 16:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:07:59.590 16:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.848 16:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.106 16:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:00.106 16:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:00.363 true 00:08:00.363 16:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:00.363 16:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.326 16:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.589 16:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:01.589 16:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:01.846 true 00:08:01.846 
16:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:01.846 16:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.104 16:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.362 16:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:02.362 16:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:02.362 true 00:08:02.362 16:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:02.362 16:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.293 16:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.551 16:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:03.551 16:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:03.808 true 00:08:03.808 16:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:03.808 16:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:08:04.065 16:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.322 16:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:04.322 16:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:04.579 true 00:08:04.579 16:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:04.579 16:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.145 16:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.710 16:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:05.710 16:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:05.710 true 00:08:05.710 16:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:05.710 16:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.968 16:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:08:06.226 16:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:06.226 16:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:06.483 true 00:08:06.483 16:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:06.483 16:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.415 16:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.673 16:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:07.673 16:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:07.930 true 00:08:07.930 16:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:07.930 16:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.188 16:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.446 16:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:08.446 16:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 
00:08:08.704 true 00:08:08.704 16:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:08.704 16:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.962 16:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.228 16:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:09.228 16:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:09.488 true 00:08:09.488 16:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:09.488 16:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.419 16:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:10.677 16:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:10.677 16:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:10.935 true 00:08:10.935 16:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:10.935 16:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.864 16:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.122 16:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:12.122 16:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:12.379 true 00:08:12.379 16:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:12.379 16:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.635 16:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.892 16:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:12.892 16:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:13.149 true 00:08:13.149 16:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:13.149 16:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.077 16:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.077 16:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:14.077 16:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:14.333 true 00:08:14.333 16:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:14.333 16:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.590 16:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.847 16:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:14.847 16:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:15.104 true 00:08:15.104 16:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:15.104 16:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.038 16:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.330 16:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:16.330 16:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:16.588 true 00:08:16.588 16:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:16.588 16:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.846 16:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.104 16:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:17.104 16:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:17.362 true 00:08:17.362 16:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:17.362 16:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.293 16:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.293 16:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:18.293 16:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:18.551 true 00:08:18.551 16:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:18.551 16:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.808 16:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.066 16:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:19.066 16:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:19.324 true 00:08:19.324 16:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359 00:08:19.324 16:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.514 
16:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:20.772 16:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:08:20.772 16:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:08:21.030 true
00:08:21.030 16:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359
00:08:21.030 16:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:21.288 16:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:21.546 16:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:08:21.546 16:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:08:21.546 true
00:08:21.803 16:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359
00:08:21.803 16:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:21.803 16:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:22.061 16:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:08:22.061 16:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:08:22.318 true
00:08:22.318 16:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359
00:08:22.318 16:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 16:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:23.689 16:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:08:23.689 16:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:08:23.946 true
00:08:23.946 16:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359
00:08:23.946 16:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:24.877 16:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:24.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:25.134 16:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:25.134 16:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:25.134 true
00:08:25.134 16:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359
00:08:25.134 16:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:25.391 16:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:25.649 16:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:25.649 16:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:25.906 true
00:08:25.906 16:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359
00:08:25.906 16:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:26.840 Initializing NVMe Controllers
00:08:26.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:26.840 Controller IO queue size 128, less than required.
00:08:26.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:26.840 Controller IO queue size 128, less than required.
00:08:26.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:26.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:26.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:26.840 Initialization complete. Launching workers.
00:08:26.840 ========================================================
00:08:26.840 Latency(us)
00:08:26.840 Device Information : IOPS MiB/s Average min max
00:08:26.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1116.60 0.55 64849.10 2433.42 1083678.37
00:08:26.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11941.25 5.83 10718.81 2774.70 442535.43
00:08:26.840 ========================================================
00:08:26.840 Total : 13057.85 6.38 15347.58 2433.42 1083678.37
00:08:26.840
00:08:26.840 16:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:27.097 16:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:27.097 16:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:27.355 true
00:08:27.355 16:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1437359
00:08:27.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1437359) - No such process
00:08:27.355 16:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1437359
00:08:27.355 16:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:27.612 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:27.870 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:27.870 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:27.870 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:27.870 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:27.870 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:28.127 null0
00:08:28.127 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:28.127 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:28.127 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:28.385 null1
00:08:28.385 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:28.385 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:28.385 16:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:28.642 null2
00:08:28.642 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:28.642 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:28.642 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:28.899 null3
00:08:28.899 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:28.899 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:28.899 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:29.157 null4
00:08:29.157 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:29.157 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:29.157 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:29.415 null5
00:08:29.415 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:29.415 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:29.415 16:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:29.673 null6
00:08:29.673 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:29.673 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:29.673 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:29.931 null7
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:29.931 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1441426 1441427 1441429 1441431 1441433 1441435 1441437 1441439
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.932 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:30.190 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.450 16:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:30.708 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:30.978 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:31.237 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:31.496 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:31.496 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:31.753 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.010 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.011 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.268 16:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.524 16:25:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.524 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.525 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.525 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.525 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.525 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.782 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.040 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.040 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.040 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.040 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.040 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.040 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.297 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.555 16:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.815 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.074 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.333 16:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.333 16:25:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.593 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.850 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.851 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.108 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.108 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.108 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.108 16:25:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.108 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.108 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.108 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.108 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.367 rmmod nvme_tcp 00:08:35.367 rmmod nvme_fabrics 00:08:35.367 rmmod nvme_keyring 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1436936 ']' 00:08:35.367 
16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1436936 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1436936 ']' 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1436936 00:08:35.367 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:35.625 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.625 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1436936 00:08:35.625 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:35.625 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:35.625 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1436936' 00:08:35.625 killing process with pid 1436936 00:08:35.625 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1436936 00:08:35.625 16:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1436936 00:08:35.884 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.884 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.884 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.885 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.885 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.885 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.885 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:08:35.885 16:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.787 16:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.787 00:08:37.787 real 0m46.768s 00:08:37.787 user 3m31.695s 00:08:37.787 sys 0m16.291s 00:08:37.787 16:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.787 16:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.787 ************************************ 00:08:37.787 END TEST nvmf_ns_hotplug_stress 00:08:37.787 ************************************ 00:08:37.787 16:25:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:37.787 16:25:17 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:37.787 16:25:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.787 16:25:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.787 16:25:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.787 ************************************ 00:08:37.787 START TEST nvmf_connect_stress 00:08:37.787 ************************************ 00:08:37.787 16:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:38.045 * Looking for test storage... 
00:08:38.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.045 16:25:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.045 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.046 16:25:17 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.046 16:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.947 16:25:19 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.947 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.947 
16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.948 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.948 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.948 
16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.948 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.948 16:25:19 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.948 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:39.948 00:08:39.948 --- 10.0.0.2 ping statistics --- 00:08:39.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.948 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:39.948 00:08:39.948 --- 10.0.0.1 ping statistics --- 00:08:39.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.948 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.948 16:25:19 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1444184 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1444184 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1444184 ']' 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.948 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.207 [2024-07-15 16:25:19.571955] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:08:40.207 [2024-07-15 16:25:19.572022] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.207 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.207 [2024-07-15 16:25:19.638394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.207 [2024-07-15 16:25:19.747079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:40.207 [2024-07-15 16:25:19.747135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.207 [2024-07-15 16:25:19.747172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.207 [2024-07-15 16:25:19.747184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.207 [2024-07-15 16:25:19.747194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.207 [2024-07-15 16:25:19.747267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.207 [2024-07-15 16:25:19.747344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.207 [2024-07-15 16:25:19.747348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.466 [2024-07-15 16:25:19.894324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.466 16:25:19 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.466 [2024-07-15 16:25:19.921063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.466 NULL1 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1444210 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:40.466 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:40.467 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1444210 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.467 16:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.725 16:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.647 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1444210 00:08:50.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1444210) - No such process 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1444210 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.906 rmmod nvme_tcp 00:08:50.906 rmmod nvme_fabrics 00:08:50.906 rmmod
nvme_keyring 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1444184 ']' 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1444184 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1444184 ']' 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1444184 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1444184 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1444184' 00:08:50.906 killing process with pid 1444184 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1444184 00:08:50.906 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1444184 00:08:51.165 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.165 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.165 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.166 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.166 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.166 16:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.166 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.166 16:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.071 16:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.071 00:08:53.071 real 0m15.292s 00:08:53.071 user 0m38.165s 00:08:53.071 sys 0m5.953s 00:08:53.071 16:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.071 16:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.071 ************************************ 00:08:53.071 END TEST nvmf_connect_stress 00:08:53.071 ************************************ 00:08:53.330 16:25:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.330 16:25:32 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:53.330 16:25:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.330 16:25:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.330 16:25:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.330 ************************************ 00:08:53.330 START TEST nvmf_fused_ordering 00:08:53.330 ************************************ 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:53.330 * Looking for test storage... 
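The connect_stress run traced above repeatedly probes the backgrounded stress process with `kill -0 <pid>` and issues an RPC while it is still alive; once the PID disappears, the script logs `No such process` and proceeds to teardown. A minimal sketch of that liveness-polling pattern (the stand-in process, PID handling, and sleep intervals here are illustrative, not taken from the log):

```shell
# Stand-in for the long-running connect_stress workload: a backgrounded process.
sleep 2 &
pid=$!

# kill -0 sends no signal; it only checks whether the PID still exists.
# The real script issues an rpc_cmd on each iteration while the process lives.
while kill -0 "$pid" 2>/dev/null; do
    sleep 1   # placeholder for per-iteration work (rpc_cmd in the log)
done

echo "process $pid has exited"
```

When the loop falls through, the subsequent `kill -0` in the cleanup path fails with `No such process`, which is exactly the expected (and logged) end state of the test.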
00:08:53.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.330 16:25:32 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.330 16:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:55.231 16:25:34 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.231 
16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:55.231 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:55.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.231 
16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:55.231 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.231 16:25:34 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.231 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:55.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
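The `pci_net_devs` lookup above (common.sh@383-@400) resolves each discovered PCI network function to its kernel interface name via sysfs. A minimal standalone sketch of that lookup — the PCI address `0000:0a:00.0` is taken from this run and is an assumption for any other machine:

```shell
# Sketch of the sysfs lookup behind nvmf/common.sh@383-@400: resolve a PCI
# network function to the kernel net interface name(s) registered under it.
# 0000:0a:00.0 is the address seen in this log; override via PCI_ADDR.
pci=${PCI_ADDR:-0000:0a:00.0}
names=""
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue              # skip the literal unmatched glob
    names="$names ${dev##*/}"              # keep only the interface name
done
if [ -z "$names" ]; then
    msg="No net devices under $pci"
else
    msg="Found net devices under $pci:$names"
fi
echo "$msg"
```

On this host the loop yields `cvl_0_0` and `cvl_0_1` for the two ice ports, matching the "Found net devices under ..." lines in the log.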
00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:55.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
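The `nvmf_tcp_init` sequence above moves the target port into its own network namespace so initiator and target traffic actually traverse the link between the two ports, then opens the NVMe/TCP port in the firewall. A condensed, root-requiring sketch of the same commands; the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.x` addresses are specific to this run:

```shell
# Condensed sketch of nvmf_tcp_init as logged above (common.sh@244-@267).
# Requires root and two connected ports; names/addresses are from this run.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                              # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2                                        # verify target reachable
```

The two ping checks in the log (initiator to 10.0.0.2, and target namespace back to 10.0.0.1) confirm the topology before the target application starts inside the namespace.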
00:08:55.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:08:55.232 00:08:55.232 --- 10.0.0.2 ping statistics --- 00:08:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.232 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:55.232 00:08:55.232 --- 10.0.0.1 ping statistics --- 00:08:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.232 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:55.232 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:55.490 16:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:55.490 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:55.490 16:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:55.490 16:25:34 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:55.490 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1447417 00:08:55.490 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.491 16:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1447417 00:08:55.491 16:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1447417 ']' 00:08:55.491 16:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.491 16:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.491 16:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.491 16:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.491 16:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:55.491 [2024-07-15 16:25:34.900910] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:08:55.491 [2024-07-15 16:25:34.901006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.491 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.491 [2024-07-15 16:25:34.973731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.749 [2024-07-15 16:25:35.089078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:55.749 [2024-07-15 16:25:35.089132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.749 [2024-07-15 16:25:35.089158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.749 [2024-07-15 16:25:35.089172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.749 [2024-07-15 16:25:35.089183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.749 [2024-07-15 16:25:35.089223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:56.314 [2024-07-15 16:25:35.864977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:56.314 [2024-07-15 16:25:35.881100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:56.314 NULL1 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
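The `rpc_cmd` calls above configure the running `nvmf_tgt` over its UNIX-domain RPC socket. The same configuration can be reproduced with SPDK's `scripts/rpc.py` (which targets `/var/tmp/spdk.sock` by default); the sketch below assumes an SPDK checkout and a target already running, as in this job:

```shell
# Standalone equivalent of the RPC sequence logged above, in log order.
RPC="./scripts/rpc.py"                                 # defaults to /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB IO unit size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                    # allow any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                        # listen inside the target netns
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512 B blocks
$RPC bdev_wait_for_examine                             # let bdev examine callbacks finish
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

This matches the "Namespace ID: 1 size: 1GB" line reported when the fused_ordering initiator attaches to the subsystem below.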
00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.314 16:25:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:56.572 [2024-07-15 16:25:35.928536] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:08:56.573 [2024-07-15 16:25:35.928580] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447522 ] 00:08:56.573 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.138 Attached to nqn.2016-06.io.spdk:cnode1 00:08:57.138 Namespace ID: 1 size: 1GB 00:08:57.138 fused_ordering(0) 00:08:57.138 fused_ordering(1) 00:08:57.138 fused_ordering(2) 00:08:57.138 fused_ordering(3) 00:08:57.138 fused_ordering(4) 00:08:57.138 fused_ordering(5) 00:08:57.138 fused_ordering(6) 00:08:57.138 fused_ordering(7) 00:08:57.138 fused_ordering(8) 00:08:57.138 fused_ordering(9) 00:08:57.138 fused_ordering(10) 00:08:57.138 fused_ordering(11) 00:08:57.138 fused_ordering(12) 00:08:57.138 fused_ordering(13) 00:08:57.138 fused_ordering(14) 00:08:57.138 fused_ordering(15) 00:08:57.138 fused_ordering(16) 00:08:57.138 fused_ordering(17) 00:08:57.138 fused_ordering(18) 00:08:57.138 fused_ordering(19) 00:08:57.138 fused_ordering(20) 00:08:57.138 fused_ordering(21) 00:08:57.138 fused_ordering(22) 00:08:57.138 fused_ordering(23) 00:08:57.138 fused_ordering(24) 00:08:57.138 fused_ordering(25) 00:08:57.138 
fused_ordering(26) … fused_ordering(632) [per-operation log lines for fused_ordering 26-632 elided; counters advance sequentially while timestamps step from 00:08:57.138 through 00:08:57.704, 00:08:58.270, and 00:08:58.836]
00:08:58.836 fused_ordering(633) 00:08:58.836 fused_ordering(634) 00:08:58.836 fused_ordering(635) 00:08:58.836 fused_ordering(636) 00:08:58.836 fused_ordering(637) 00:08:58.836 fused_ordering(638) 00:08:58.836 fused_ordering(639) 00:08:58.836 fused_ordering(640) 00:08:58.836 fused_ordering(641) 00:08:58.836 fused_ordering(642) 00:08:58.836 fused_ordering(643) 00:08:58.836 fused_ordering(644) 00:08:58.837 fused_ordering(645) 00:08:58.837 fused_ordering(646) 00:08:58.837 fused_ordering(647) 00:08:58.837 fused_ordering(648) 00:08:58.837 fused_ordering(649) 00:08:58.837 fused_ordering(650) 00:08:58.837 fused_ordering(651) 00:08:58.837 fused_ordering(652) 00:08:58.837 fused_ordering(653) 00:08:58.837 fused_ordering(654) 00:08:58.837 fused_ordering(655) 00:08:58.837 fused_ordering(656) 00:08:58.837 fused_ordering(657) 00:08:58.837 fused_ordering(658) 00:08:58.837 fused_ordering(659) 00:08:58.837 fused_ordering(660) 00:08:58.837 fused_ordering(661) 00:08:58.837 fused_ordering(662) 00:08:58.837 fused_ordering(663) 00:08:58.837 fused_ordering(664) 00:08:58.837 fused_ordering(665) 00:08:58.837 fused_ordering(666) 00:08:58.837 fused_ordering(667) 00:08:58.837 fused_ordering(668) 00:08:58.837 fused_ordering(669) 00:08:58.837 fused_ordering(670) 00:08:58.837 fused_ordering(671) 00:08:58.837 fused_ordering(672) 00:08:58.837 fused_ordering(673) 00:08:58.837 fused_ordering(674) 00:08:58.837 fused_ordering(675) 00:08:58.837 fused_ordering(676) 00:08:58.837 fused_ordering(677) 00:08:58.837 fused_ordering(678) 00:08:58.837 fused_ordering(679) 00:08:58.837 fused_ordering(680) 00:08:58.837 fused_ordering(681) 00:08:58.837 fused_ordering(682) 00:08:58.837 fused_ordering(683) 00:08:58.837 fused_ordering(684) 00:08:58.837 fused_ordering(685) 00:08:58.837 fused_ordering(686) 00:08:58.837 fused_ordering(687) 00:08:58.837 fused_ordering(688) 00:08:58.837 fused_ordering(689) 00:08:58.837 fused_ordering(690) 00:08:58.837 fused_ordering(691) 00:08:58.837 fused_ordering(692) 00:08:58.837 
fused_ordering(693) 00:08:58.837 fused_ordering(694) 00:08:58.837 fused_ordering(695) 00:08:58.837 fused_ordering(696) 00:08:58.837 fused_ordering(697) 00:08:58.837 fused_ordering(698) 00:08:58.837 fused_ordering(699) 00:08:58.837 fused_ordering(700) 00:08:58.837 fused_ordering(701) 00:08:58.837 fused_ordering(702) 00:08:58.837 fused_ordering(703) 00:08:58.837 fused_ordering(704) 00:08:58.837 fused_ordering(705) 00:08:58.837 fused_ordering(706) 00:08:58.837 fused_ordering(707) 00:08:58.837 fused_ordering(708) 00:08:58.837 fused_ordering(709) 00:08:58.837 fused_ordering(710) 00:08:58.837 fused_ordering(711) 00:08:58.837 fused_ordering(712) 00:08:58.837 fused_ordering(713) 00:08:58.837 fused_ordering(714) 00:08:58.837 fused_ordering(715) 00:08:58.837 fused_ordering(716) 00:08:58.837 fused_ordering(717) 00:08:58.837 fused_ordering(718) 00:08:58.837 fused_ordering(719) 00:08:58.837 fused_ordering(720) 00:08:58.837 fused_ordering(721) 00:08:58.837 fused_ordering(722) 00:08:58.837 fused_ordering(723) 00:08:58.837 fused_ordering(724) 00:08:58.837 fused_ordering(725) 00:08:58.837 fused_ordering(726) 00:08:58.837 fused_ordering(727) 00:08:58.837 fused_ordering(728) 00:08:58.837 fused_ordering(729) 00:08:58.837 fused_ordering(730) 00:08:58.837 fused_ordering(731) 00:08:58.837 fused_ordering(732) 00:08:58.837 fused_ordering(733) 00:08:58.837 fused_ordering(734) 00:08:58.837 fused_ordering(735) 00:08:58.837 fused_ordering(736) 00:08:58.837 fused_ordering(737) 00:08:58.837 fused_ordering(738) 00:08:58.837 fused_ordering(739) 00:08:58.837 fused_ordering(740) 00:08:58.837 fused_ordering(741) 00:08:58.837 fused_ordering(742) 00:08:58.837 fused_ordering(743) 00:08:58.837 fused_ordering(744) 00:08:58.837 fused_ordering(745) 00:08:58.837 fused_ordering(746) 00:08:58.837 fused_ordering(747) 00:08:58.837 fused_ordering(748) 00:08:58.837 fused_ordering(749) 00:08:58.837 fused_ordering(750) 00:08:58.837 fused_ordering(751) 00:08:58.837 fused_ordering(752) 00:08:58.837 fused_ordering(753) 
00:08:58.837 fused_ordering(754) 00:08:58.837 fused_ordering(755) 00:08:58.837 fused_ordering(756) 00:08:58.837 fused_ordering(757) 00:08:58.837 fused_ordering(758) 00:08:58.837 fused_ordering(759) 00:08:58.837 fused_ordering(760) 00:08:58.837 fused_ordering(761) 00:08:58.837 fused_ordering(762) 00:08:58.837 fused_ordering(763) 00:08:58.837 fused_ordering(764) 00:08:58.837 fused_ordering(765) 00:08:58.837 fused_ordering(766) 00:08:58.837 fused_ordering(767) 00:08:58.837 fused_ordering(768) 00:08:58.837 fused_ordering(769) 00:08:58.837 fused_ordering(770) 00:08:58.837 fused_ordering(771) 00:08:58.837 fused_ordering(772) 00:08:58.837 fused_ordering(773) 00:08:58.837 fused_ordering(774) 00:08:58.837 fused_ordering(775) 00:08:58.837 fused_ordering(776) 00:08:58.837 fused_ordering(777) 00:08:58.837 fused_ordering(778) 00:08:58.837 fused_ordering(779) 00:08:58.837 fused_ordering(780) 00:08:58.837 fused_ordering(781) 00:08:58.837 fused_ordering(782) 00:08:58.837 fused_ordering(783) 00:08:58.837 fused_ordering(784) 00:08:58.837 fused_ordering(785) 00:08:58.837 fused_ordering(786) 00:08:58.837 fused_ordering(787) 00:08:58.837 fused_ordering(788) 00:08:58.837 fused_ordering(789) 00:08:58.837 fused_ordering(790) 00:08:58.837 fused_ordering(791) 00:08:58.837 fused_ordering(792) 00:08:58.837 fused_ordering(793) 00:08:58.837 fused_ordering(794) 00:08:58.837 fused_ordering(795) 00:08:58.837 fused_ordering(796) 00:08:58.837 fused_ordering(797) 00:08:58.837 fused_ordering(798) 00:08:58.837 fused_ordering(799) 00:08:58.837 fused_ordering(800) 00:08:58.837 fused_ordering(801) 00:08:58.837 fused_ordering(802) 00:08:58.837 fused_ordering(803) 00:08:58.837 fused_ordering(804) 00:08:58.837 fused_ordering(805) 00:08:58.837 fused_ordering(806) 00:08:58.837 fused_ordering(807) 00:08:58.837 fused_ordering(808) 00:08:58.837 fused_ordering(809) 00:08:58.837 fused_ordering(810) 00:08:58.837 fused_ordering(811) 00:08:58.837 fused_ordering(812) 00:08:58.837 fused_ordering(813) 00:08:58.837 
fused_ordering(814) 00:08:58.837 fused_ordering(815) 00:08:58.837 fused_ordering(816) 00:08:58.837 fused_ordering(817) 00:08:58.837 fused_ordering(818) 00:08:58.837 fused_ordering(819) 00:08:58.837 fused_ordering(820) 00:08:59.771 fused_ordering(821) 00:08:59.771 fused_ordering(822) 00:08:59.771 fused_ordering(823) 00:08:59.771 fused_ordering(824) 00:08:59.771 fused_ordering(825) 00:08:59.771 fused_ordering(826) 00:08:59.771 fused_ordering(827) 00:08:59.771 fused_ordering(828) 00:08:59.771 fused_ordering(829) 00:08:59.771 fused_ordering(830) 00:08:59.771 fused_ordering(831) 00:08:59.771 fused_ordering(832) 00:08:59.771 fused_ordering(833) 00:08:59.771 fused_ordering(834) 00:08:59.771 fused_ordering(835) 00:08:59.771 fused_ordering(836) 00:08:59.771 fused_ordering(837) 00:08:59.771 fused_ordering(838) 00:08:59.771 fused_ordering(839) 00:08:59.771 fused_ordering(840) 00:08:59.771 fused_ordering(841) 00:08:59.771 fused_ordering(842) 00:08:59.771 fused_ordering(843) 00:08:59.771 fused_ordering(844) 00:08:59.771 fused_ordering(845) 00:08:59.771 fused_ordering(846) 00:08:59.771 fused_ordering(847) 00:08:59.771 fused_ordering(848) 00:08:59.771 fused_ordering(849) 00:08:59.771 fused_ordering(850) 00:08:59.771 fused_ordering(851) 00:08:59.771 fused_ordering(852) 00:08:59.771 fused_ordering(853) 00:08:59.771 fused_ordering(854) 00:08:59.771 fused_ordering(855) 00:08:59.771 fused_ordering(856) 00:08:59.771 fused_ordering(857) 00:08:59.771 fused_ordering(858) 00:08:59.771 fused_ordering(859) 00:08:59.771 fused_ordering(860) 00:08:59.771 fused_ordering(861) 00:08:59.771 fused_ordering(862) 00:08:59.771 fused_ordering(863) 00:08:59.771 fused_ordering(864) 00:08:59.771 fused_ordering(865) 00:08:59.771 fused_ordering(866) 00:08:59.771 fused_ordering(867) 00:08:59.771 fused_ordering(868) 00:08:59.771 fused_ordering(869) 00:08:59.771 fused_ordering(870) 00:08:59.771 fused_ordering(871) 00:08:59.771 fused_ordering(872) 00:08:59.771 fused_ordering(873) 00:08:59.771 fused_ordering(874) 
00:08:59.771 fused_ordering(875) 00:08:59.771 fused_ordering(876) 00:08:59.771 fused_ordering(877) 00:08:59.771 fused_ordering(878) 00:08:59.771 fused_ordering(879) 00:08:59.771 fused_ordering(880) 00:08:59.771 fused_ordering(881) 00:08:59.771 fused_ordering(882) 00:08:59.771 fused_ordering(883) 00:08:59.771 fused_ordering(884) 00:08:59.771 fused_ordering(885) 00:08:59.771 fused_ordering(886) 00:08:59.771 fused_ordering(887) 00:08:59.771 fused_ordering(888) 00:08:59.771 fused_ordering(889) 00:08:59.771 fused_ordering(890) 00:08:59.771 fused_ordering(891) 00:08:59.771 fused_ordering(892) 00:08:59.771 fused_ordering(893) 00:08:59.771 fused_ordering(894) 00:08:59.771 fused_ordering(895) 00:08:59.771 fused_ordering(896) 00:08:59.771 fused_ordering(897) 00:08:59.771 fused_ordering(898) 00:08:59.771 fused_ordering(899) 00:08:59.771 fused_ordering(900) 00:08:59.771 fused_ordering(901) 00:08:59.771 fused_ordering(902) 00:08:59.771 fused_ordering(903) 00:08:59.771 fused_ordering(904) 00:08:59.771 fused_ordering(905) 00:08:59.771 fused_ordering(906) 00:08:59.771 fused_ordering(907) 00:08:59.771 fused_ordering(908) 00:08:59.771 fused_ordering(909) 00:08:59.771 fused_ordering(910) 00:08:59.771 fused_ordering(911) 00:08:59.771 fused_ordering(912) 00:08:59.771 fused_ordering(913) 00:08:59.771 fused_ordering(914) 00:08:59.771 fused_ordering(915) 00:08:59.771 fused_ordering(916) 00:08:59.771 fused_ordering(917) 00:08:59.771 fused_ordering(918) 00:08:59.771 fused_ordering(919) 00:08:59.771 fused_ordering(920) 00:08:59.771 fused_ordering(921) 00:08:59.771 fused_ordering(922) 00:08:59.771 fused_ordering(923) 00:08:59.771 fused_ordering(924) 00:08:59.771 fused_ordering(925) 00:08:59.771 fused_ordering(926) 00:08:59.771 fused_ordering(927) 00:08:59.771 fused_ordering(928) 00:08:59.771 fused_ordering(929) 00:08:59.771 fused_ordering(930) 00:08:59.771 fused_ordering(931) 00:08:59.771 fused_ordering(932) 00:08:59.771 fused_ordering(933) 00:08:59.771 fused_ordering(934) 00:08:59.771 
fused_ordering(935) 00:08:59.771 fused_ordering(936) 00:08:59.772 fused_ordering(937) 00:08:59.772 fused_ordering(938) 00:08:59.772 fused_ordering(939) 00:08:59.772 fused_ordering(940) 00:08:59.772 fused_ordering(941) 00:08:59.772 fused_ordering(942) 00:08:59.772 fused_ordering(943) 00:08:59.772 fused_ordering(944) 00:08:59.772 fused_ordering(945) 00:08:59.772 fused_ordering(946) 00:08:59.772 fused_ordering(947) 00:08:59.772 fused_ordering(948) 00:08:59.772 fused_ordering(949) 00:08:59.772 fused_ordering(950) 00:08:59.772 fused_ordering(951) 00:08:59.772 fused_ordering(952) 00:08:59.772 fused_ordering(953) 00:08:59.772 fused_ordering(954) 00:08:59.772 fused_ordering(955) 00:08:59.772 fused_ordering(956) 00:08:59.772 fused_ordering(957) 00:08:59.772 fused_ordering(958) 00:08:59.772 fused_ordering(959) 00:08:59.772 fused_ordering(960) 00:08:59.772 fused_ordering(961) 00:08:59.772 fused_ordering(962) 00:08:59.772 fused_ordering(963) 00:08:59.772 fused_ordering(964) 00:08:59.772 fused_ordering(965) 00:08:59.772 fused_ordering(966) 00:08:59.772 fused_ordering(967) 00:08:59.772 fused_ordering(968) 00:08:59.772 fused_ordering(969) 00:08:59.772 fused_ordering(970) 00:08:59.772 fused_ordering(971) 00:08:59.772 fused_ordering(972) 00:08:59.772 fused_ordering(973) 00:08:59.772 fused_ordering(974) 00:08:59.772 fused_ordering(975) 00:08:59.772 fused_ordering(976) 00:08:59.772 fused_ordering(977) 00:08:59.772 fused_ordering(978) 00:08:59.772 fused_ordering(979) 00:08:59.772 fused_ordering(980) 00:08:59.772 fused_ordering(981) 00:08:59.772 fused_ordering(982) 00:08:59.772 fused_ordering(983) 00:08:59.772 fused_ordering(984) 00:08:59.772 fused_ordering(985) 00:08:59.772 fused_ordering(986) 00:08:59.772 fused_ordering(987) 00:08:59.772 fused_ordering(988) 00:08:59.772 fused_ordering(989) 00:08:59.772 fused_ordering(990) 00:08:59.772 fused_ordering(991) 00:08:59.772 fused_ordering(992) 00:08:59.772 fused_ordering(993) 00:08:59.772 fused_ordering(994) 00:08:59.772 fused_ordering(995) 
00:08:59.772 fused_ordering(996) 00:08:59.772 fused_ordering(997) 00:08:59.772 fused_ordering(998) 00:08:59.772 fused_ordering(999) 00:08:59.772 fused_ordering(1000) 00:08:59.772 fused_ordering(1001) 00:08:59.772 fused_ordering(1002) 00:08:59.772 fused_ordering(1003) 00:08:59.772 fused_ordering(1004) 00:08:59.772 fused_ordering(1005) 00:08:59.772 fused_ordering(1006) 00:08:59.772 fused_ordering(1007) 00:08:59.772 fused_ordering(1008) 00:08:59.772 fused_ordering(1009) 00:08:59.772 fused_ordering(1010) 00:08:59.772 fused_ordering(1011) 00:08:59.772 fused_ordering(1012) 00:08:59.772 fused_ordering(1013) 00:08:59.772 fused_ordering(1014) 00:08:59.772 fused_ordering(1015) 00:08:59.772 fused_ordering(1016) 00:08:59.772 fused_ordering(1017) 00:08:59.772 fused_ordering(1018) 00:08:59.772 fused_ordering(1019) 00:08:59.772 fused_ordering(1020) 00:08:59.772 fused_ordering(1021) 00:08:59.772 fused_ordering(1022) 00:08:59.772 fused_ordering(1023) 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.772 rmmod nvme_tcp 00:08:59.772 rmmod nvme_fabrics 00:08:59.772 rmmod nvme_keyring 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 
00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1447417 ']' 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1447417 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1447417 ']' 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1447417 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1447417 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1447417' 00:08:59.772 killing process with pid 1447417 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1447417 00:08:59.772 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1447417 00:09:00.031 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.031 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.031 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.031 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.031 16:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.031 16:25:39 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.032 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.032 16:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.935 16:25:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.935 00:09:01.935 real 0m8.785s 00:09:01.935 user 0m6.530s 00:09:01.935 sys 0m3.908s 00:09:01.935 16:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.935 16:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.935 ************************************ 00:09:01.935 END TEST nvmf_fused_ordering 00:09:01.935 ************************************ 00:09:01.935 16:25:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.935 16:25:41 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:01.935 16:25:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.935 16:25:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.935 16:25:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.935 ************************************ 00:09:01.935 START TEST nvmf_delete_subsystem 00:09:01.935 ************************************ 00:09:01.935 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:02.226 * Looking for test storage... 
00:09:02.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.226 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.227 16:25:41 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.227 16:25:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.150 16:25:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.150 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:04.151 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:04.151 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:04.151 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:04.151 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.151 
16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:04.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:04.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:09:04.151 00:09:04.151 --- 10.0.0.2 ping statistics --- 00:09:04.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.151 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:09:04.151 00:09:04.151 --- 10.0.0.1 ping statistics --- 00:09:04.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.151 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.151 
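The nvmf_tcp_init trace above shows how the harness gets a TCP path between target and initiator on one host: one port of the dual-port ice NIC (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as 10.0.0.1, and both ends are verified with ping. A minimal sketch of that plumbing, using the interface names, addresses, and port from this log; the commands need root, so RUN defaults to echo here as a dry run:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing performed by nvmf/common.sh
# (nvmf_tcp_init) in the trace above. Interface names cvl_0_0/cvl_0_1 and the
# 10.0.0.0/24 addressing mirror the log; RUN=echo prints the privileged
# commands instead of executing them (set RUN= and run as root for real).
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk

setup_tcp_ns() {
    $RUN ip -4 addr flush cvl_0_0
    $RUN ip -4 addr flush cvl_0_1
    $RUN ip netns add "$NS"
    $RUN ip link set cvl_0_0 netns "$NS"          # target port moves into the netns
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    $RUN ip netns exec "$NS" ip link set lo up
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $RUN ping -c 1 10.0.0.2                       # target must answer from the root ns
}

setup_tcp_ns
```

Because both ports are physically cabled to each other, traffic between 10.0.0.1 and 10.0.0.2 actually traverses the NIC rather than loopback, which is why the harness later wraps nvmf_tgt in `ip netns exec cvl_0_0_ns_spdk`.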
16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1449845 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1449845 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1449845 ']' 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.151 16:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:04.151 [2024-07-15 16:25:43.704401] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:09:04.151 [2024-07-15 16:25:43.704483] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.151 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.407 [2024-07-15 16:25:43.778848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.407 [2024-07-15 16:25:43.897004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.408 [2024-07-15 16:25:43.897050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.408 [2024-07-15 16:25:43.897074] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.408 [2024-07-15 16:25:43.897086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.408 [2024-07-15 16:25:43.897097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:04.408 [2024-07-15 16:25:43.897155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.408 [2024-07-15 16:25:43.897173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.338 [2024-07-15 16:25:44.712286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.338 [2024-07-15 16:25:44.728459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:05.338 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.339 NULL1 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.339 Delay0 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
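With the target running, the trace above provisions everything over JSON-RPC: a TCP transport, subsystem cnode1 (max 10 namespaces), a listener on 10.0.0.2:4420, a null bdev, and a delay bdev wrapping it with 1,000,000 us of artificial latency, so that queued I/O is still in flight when the subsystem is deleted. A sketch of that sequence with rpc_cmd stubbed to echo (in the harness, rpc_cmd invokes spdk/scripts/rpc.py against the target's /var/tmp/spdk.sock):

```shell
#!/usr/bin/env bash
# The RPC calls traced above (delete_subsystem.sh@15-24). rpc_cmd is stubbed
# here so the sequence can be inspected without a running nvmf_tgt; in the
# harness it wraps spdk/scripts/rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

provision() {
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks
    # 1,000,000 us (1 s) average and p99 latency on reads and writes: keeps the
    # perf tool's queue depth 128 of I/O pinned in flight during the delete.
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

provision
```

The delay bdev is the crux of the test: without it the null bdev would complete I/O instantly and `nvmf_delete_subsystem` would have nothing outstanding to abort.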
00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1450002 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:05.339 16:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:05.339 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.339 [2024-07-15 16:25:44.803123] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:07.235 16:25:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.235 16:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.235 16:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error 
(sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 [2024-07-15 
16:25:46.973444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbd3e0 is same with the state(5) to be set 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error 
(sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 [2024-07-15 16:25:46.974117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbd7a0 is same with the state(5) to be set 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 
00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 starting I/O failed: -6 00:09:07.492 [2024-07-15 16:25:46.974728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7f993000d2f0 is same with the state(5) to be set 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 
00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Write completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:07.492 Read completed with error (sct=0, sc=8) 00:09:08.422 [2024-07-15 16:25:47.941892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbeac0 is same with the state(5) to be set 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read 
completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 [2024-07-15 16:25:47.976319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f993000cfe0 is same with the state(5) to be set 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed 
with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 [2024-07-15 16:25:47.977514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f993000d600 is same with the state(5) to be set 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.422 Write completed with error (sct=0, sc=8) 00:09:08.422 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error 
(sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 [2024-07-15 16:25:47.977936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbd5c0 is same with the state(5) to be set 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Write completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 Read completed with error (sct=0, sc=8) 00:09:08.423 [2024-07-15 16:25:47.978119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbd980 is same with the state(5) to be set 00:09:08.423 Initializing NVMe Controllers 00:09:08.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:08.423 Controller IO queue size 128, less than required. 00:09:08.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:08.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:08.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:08.423 Initialization complete. Launching workers.
00:09:08.423 ========================================================
00:09:08.423 Latency(us)
00:09:08.423 Device Information : IOPS MiB/s Average min max
00:09:08.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.90 0.08 930623.44 694.69 2005183.63
00:09:08.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.90 0.08 945294.99 430.62 2003209.18
00:09:08.423 ========================================================
00:09:08.423 Total : 330.80 0.16 937937.19 430.62 2005183.63
00:09:08.423
00:09:08.423 [2024-07-15 16:25:47.978928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbeac0 (9): Bad file descriptor
00:09:08.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:08.423 16:25:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:08.423 16:25:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:08.423 16:25:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1450002
00:09:08.423 16:25:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1450002
00:09:08.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1450002) - No such process
00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1450002
00:09:08.988 16:25:48
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1450002 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1450002 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.988 [2024-07-15 16:25:48.501536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1450520 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520 00:09:08.988 16:25:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.988 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.988 [2024-07-15 16:25:48.565276] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
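The delete_subsystem trace above follows a simple poll-until-exit idiom: spdk_nvme_perf is started in the background, the subsystem is deleted while I/O is in flight, and the script then polls the perf PID with `kill -0` in a bounded sleep loop until the process disappears. A minimal, self-contained sketch of that loop (the background `sleep` is a hypothetical stand-in for spdk_nvme_perf; the 20-iteration cap and 0.5 s interval are taken from target/delete_subsystem.sh as shown in the trace):

```shell
#!/usr/bin/env bash
# Start the workload in the background and remember its PID, exactly as the
# test script does with perf_pid=$!.
sleep 1 &                      # hypothetical stand-in for spdk_nvme_perf
perf_pid=$!
delay=0
# kill -0 sends no signal; it only checks that the PID still exists.
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf still running after cap" >&2; exit 1; }
    sleep 0.5
done
echo "perf (pid $perf_pid) exited after $delay polls"
```

Once `kill -0` starts failing, the script confirms the exit status via `wait` on the dead PID, which is why the log shows the benign "kill: (1450002) - No such process" line.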
00:09:09.553 16:25:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:09.553 16:25:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520
00:09:09.553 16:25:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:10.117 16:25:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:10.117 16:25:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520
00:09:10.117 16:25:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:10.682 16:25:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:10.682 16:25:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520
00:09:10.682 16:25:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:10.939 16:25:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:10.939 16:25:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520
00:09:10.939 16:25:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:11.503 16:25:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:11.503 16:25:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520
00:09:11.503 16:25:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:12.067 16:25:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:12.067 16:25:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520
00:09:12.067 16:25:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:12.324 Initializing NVMe Controllers
00:09:12.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:12.324 Controller IO queue size 128, less than required.
00:09:12.324 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:12.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:12.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:12.324 Initialization complete. Launching workers.
00:09:12.324 ========================================================
00:09:12.324 Latency(us)
00:09:12.324 Device Information : IOPS MiB/s Average min max
00:09:12.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003787.72 1000247.58 1041378.62
00:09:12.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005673.63 1000333.65 1042606.45
00:09:12.324 ========================================================
00:09:12.324 Total : 256.00 0.12 1004730.67 1000247.58 1042606.45
00:09:12.324
00:09:12.581 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1450520
00:09:12.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1450520) - No such process
00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1450520
00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.582 rmmod nvme_tcp 00:09:12.582 rmmod nvme_fabrics 00:09:12.582 rmmod nvme_keyring 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1449845 ']' 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1449845 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1449845 ']' 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1449845 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1449845 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1449845' 00:09:12.582 killing process with pid 1449845 00:09:12.582 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1449845 00:09:12.582 16:25:52 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1449845 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.840 16:25:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.373 16:25:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.373 00:09:15.373 real 0m12.925s 00:09:15.373 user 0m29.451s 00:09:15.373 sys 0m2.995s 00:09:15.373 16:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.373 16:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:15.373 ************************************ 00:09:15.373 END TEST nvmf_delete_subsystem 00:09:15.373 ************************************ 00:09:15.373 16:25:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:15.373 16:25:54 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:15.373 16:25:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.373 16:25:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.373 16:25:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.373 ************************************ 
00:09:15.373 START TEST nvmf_ns_masking 00:09:15.373 ************************************ 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:15.373 * Looking for test storage... 00:09:15.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.373 16:25:54 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=aaae7e34-dd72-4782-b2be-7be5bcf6bd44 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fd8caa50-291e-454d-bea5-2f272f536443 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8991d91a-1950-4daf-be76-10a18246bf55 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.373 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.374 16:25:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.374 16:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:17.274 
16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:17.274 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:17.274 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:17.274 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.274 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:17.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:17.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:09:17.275 00:09:17.275 --- 10.0.0.2 ping statistics --- 00:09:17.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.275 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:09:17.275 00:09:17.275 --- 10.0.0.1 ping statistics --- 00:09:17.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.275 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.275 16:25:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1452873 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1452873 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1452873 ']' 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.275 16:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:17.275 [2024-07-15 16:25:56.739123] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:09:17.275 [2024-07-15 16:25:56.739205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.275 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.275 [2024-07-15 16:25:56.804841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.533 [2024-07-15 16:25:56.919692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.533 [2024-07-15 16:25:56.919754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.533 [2024-07-15 16:25:56.919771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.533 [2024-07-15 16:25:56.919784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.533 [2024-07-15 16:25:56.919795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
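The `waitforlisten`/`waitforserial` helpers traced in this log follow the same pattern: retry a probe command until it reports the expected count, giving up after 15 attempts. A minimal sketch of that loop (the probe command and the short sleep are illustrative stand-ins; the real helpers probe `lsblk`/the RPC socket and sleep 2 s between attempts):

```shell
# Retry-until-ready loop in the style of the autotest helpers above.
# $1 = expected count, $2 = probe command whose stdout is a number.
waitfor() {
    local expected=$1 probe=$2 i=0
    while (( i++ <= 15 )); do          # same bound as the logged (( i++ <= 15 ))
        (( $($probe) == expected )) && return 0
        sleep 0.1                       # the real helpers sleep 2s here
    done
    return 1                            # timed out
}

waitfor 1 "echo 1" && echo ready
```

The bound of 16 attempts matches the `(( i++ <= 15 ))` guard visible in the trace; only the probe and delay are simplified.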
00:09:17.533 [2024-07-15 16:25:56.919824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:18.466 [2024-07-15 16:25:57.963603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:18.466 16:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:18.724 Malloc1 00:09:18.724 16:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:19.289 Malloc2 00:09:19.289 16:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.578 16:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:19.578 16:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.855 [2024-07-15 16:25:59.359460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.855 16:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:19.855 16:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8991d91a-1950-4daf-be76-10a18246bf55 -a 10.0.0.2 -s 4420 -i 4 00:09:20.113 16:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.113 16:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:20.113 16:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.113 16:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:20.113 16:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:22.067 
16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:22.067 [ 0]:0x1 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:22.067 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:22.324 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d0452fea1b44712ba58f0876d8615fb 00:09:22.324 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d0452fea1b44712ba58f0876d8615fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:22.324 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:22.324 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:22.581 [ 0]:0x1 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
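The `ns_is_visible` check exercised above reads the namespace's NGUID back through `nvme id-ns ... -o json | jq -r .nguid` and treats an all-zero NGUID as "masked". A condensed sketch of that comparison, with the NGUID passed in directly instead of queried from `/dev/nvme0` (device access is what the real test does; the sample NGUIDs are taken from the log):

```shell
# A namespace hidden by masking reports an all-zero NGUID; a visible one
# reports its real NGUID (in the log: 0d0452fea1b44712ba58f0876d8615fb).
all_zero_nguid="00000000000000000000000000000000"

ns_is_visible() {
    # Real test obtains this via: nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid
    local nguid=$1
    [[ "$nguid" != "$all_zero_nguid" ]]
}

ns_is_visible "0d0452fea1b44712ba58f0876d8615fb" && echo visible || echo masked
ns_is_visible "$all_zero_nguid" && echo visible || echo masked
```

This is why the log compares each `nguid` against the long `\0\0...\0` glob pattern: the pattern is the 32-character all-zero NGUID with each character escaped.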
00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d0452fea1b44712ba58f0876d8615fb 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d0452fea1b44712ba58f0876d8615fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:22.581 [ 1]:0x2 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:22.581 16:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:22.581 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06c167b5650c4cdc9e859d802d2d9100 00:09:22.581 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06c167b5650c4cdc9e859d802d2d9100 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:22.581 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:22.581 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.581 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.839 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:23.096 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:23.096 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8991d91a-1950-4daf-be76-10a18246bf55 -a 10.0.0.2 -s 4420 -i 4 00:09:23.353 16:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:23.353 16:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:23.353 16:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.353 16:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:23.353 16:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:23.353 16:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:25.249 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:25.507 [ 0]:0x2 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06c167b5650c4cdc9e859d802d2d9100 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06c167b5650c4cdc9e859d802d2d9100 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:25.507 16:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:25.765 [ 0]:0x1 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d0452fea1b44712ba58f0876d8615fb 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d0452fea1b44712ba58f0876d8615fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:09:25.765 [ 1]:0x2 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06c167b5650c4cdc9e859d802d2d9100 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06c167b5650c4cdc9e859d802d2d9100 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:25.765 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:09:26.022 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:26.023 [ 0]:0x2 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.023 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.279 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06c167b5650c4cdc9e859d802d2d9100 00:09:26.279 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06c167b5650c4cdc9e859d802d2d9100 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.279 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:26.279 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.279 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:26.536 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:26.536 16:26:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8991d91a-1950-4daf-be76-10a18246bf55 -a 10.0.0.2 -s 4420 -i 4 00:09:26.793 16:26:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:26.793 16:26:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.793 16:26:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.793 16:26:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:26.793 16:26:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:26.793 16:26:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:28.690 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:28.946 [ 0]:0x1 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d0452fea1b44712ba58f0876d8615fb 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d0452fea1b44712ba58f0876d8615fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:28.946 [ 1]:0x2 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06c167b5650c4cdc9e859d802d2d9100 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06c167b5650c4cdc9e859d802d2d9100 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:28.946 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:29.203 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:29.461 16:26:08 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:29.461 [ 0]:0x2 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06c167b5650c4cdc9e859d802d2d9100 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06c167b5650c4cdc9e859d802d2d9100 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:29.461 16:26:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:29.718 [2024-07-15 16:26:09.173711] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:29.718 request: 00:09:29.718 { 00:09:29.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.718 "nsid": 2, 00:09:29.718 "host": "nqn.2016-06.io.spdk:host1", 00:09:29.718 "method": "nvmf_ns_remove_host", 00:09:29.718 "req_id": 1 00:09:29.718 } 00:09:29.718 Got JSON-RPC error response 00:09:29.718 response: 00:09:29.718 { 00:09:29.718 "code": -32602, 00:09:29.718 "message": "Invalid parameters" 00:09:29.718 } 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
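The failed `nvmf_ns_remove_host` call above shows SPDK's logged request fields (`nqn`, `nsid`, `host`, `method`, `req_id`) and the `-32602 Invalid parameters` reply. Reconstructed as the JSON-RPC message `rpc.py` would carry to `/var/tmp/spdk.sock` (the 2.0 envelope framing here is an assumption; the parameter names come straight from the logged request):

```shell
# JSON-RPC request corresponding to:
#   rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
# Envelope reconstructed for illustration; field names match the logged request.
req='{"jsonrpc": "2.0", "id": 1, "method": "nvmf_ns_remove_host",
 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 2,
            "host": "nqn.2016-06.io.spdk:host1"}}'
echo "$req"
```

The error is expected here: namespace 2 was added without `--no-auto-visible`, so there is no per-host visibility list to remove the host from, which is exactly what the `NOT` wrapper in the test asserts.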
00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:29.718 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:29.719 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:29.719 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:29.719 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:29.719 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:29.719 [ 0]:0x2 00:09:29.719 16:26:09 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:29.719 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06c167b5650c4cdc9e859d802d2d9100 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06c167b5650c4cdc9e859d802d2d9100 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1454504 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1454504 /var/tmp/host.sock 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1454504 ']' 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:29.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.975 16:26:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 [2024-07-15 16:26:09.516316] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:29.975 [2024-07-15 16:26:09.516395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454504 ] 00:09:29.975 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.233 [2024-07-15 16:26:09.578753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.233 [2024-07-15 16:26:09.698707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.164 16:26:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.164 16:26:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:31.164 16:26:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.164 16:26:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:31.421 16:26:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid aaae7e34-dd72-4782-b2be-7be5bcf6bd44 00:09:31.421 16:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:31.421 16:26:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AAAE7E34DD724782B2BE7BE5BCF6BD44 -i 00:09:31.985 16:26:11 nvmf_tcp.nvmf_ns_masking -- 
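The `waitforlisten 1454504 /var/tmp/host.sock` step above blocks until the freshly started `spdk_tgt` is accepting RPCs on its UNIX domain socket, bounded by `max_retries=100`. A hedged sketch of the polling idea (helper name and interval are assumptions, not SPDK's implementation):

```shell
# Poll for a UNIX domain socket to appear, giving up after max_retries
# attempts; returns 0 once the socket exists, 1 on timeout.
waitfor_socket() {
    sock=$1; max_retries=${2:-100}; i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -S "$sock" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```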
target/ns_masking.sh@125 -- # uuid2nguid fd8caa50-291e-454d-bea5-2f272f536443 00:09:31.985 16:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:31.985 16:26:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FD8CAA50291E454DBEA52F272F536443 -i 00:09:31.985 16:26:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:32.243 16:26:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:32.501 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:32.501 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:32.758 nvme0n1 00:09:33.016 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:33.016 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:33.273 nvme1n2 00:09:33.273 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
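The `uuid2nguid` calls above (the `tr -d -` step from `nvmf/common.sh@759`) turn a bdev UUID into the NGUID passed to `nvmf_subsystem_add_ns -g`: judging by the traced output, the dashes are stripped and the hex digits upper-cased. A small portable sketch of that conversion:

```shell
# Convert a UUID such as fd8caa50-291e-454d-bea5-2f272f536443 into the
# dash-free, upper-case NGUID form seen in the -g arguments above.
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

uuid2nguid fd8caa50-291e-454d-bea5-2f272f536443
# FD8CAA50291E454DBEA52F272F536443
```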
hostrpc bdev_get_bdevs 00:09:33.273 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:33.273 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:33.273 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:33.273 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:33.530 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:33.530 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:33.530 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:33.530 16:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:33.788 16:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ aaae7e34-dd72-4782-b2be-7be5bcf6bd44 == \a\a\a\e\7\e\3\4\-\d\d\7\2\-\4\7\8\2\-\b\2\b\e\-\7\b\e\5\b\c\f\6\b\d\4\4 ]] 00:09:33.788 16:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:33.788 16:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:33.788 16:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ fd8caa50-291e-454d-bea5-2f272f536443 == \f\d\8\c\a\a\5\0\-\2\9\1\e\-\4\5\4\d\-\b\e\a\5\-\2\f\2\7\2\f\5\3\6\4\4\3 ]] 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1454504 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 1454504 ']' 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1454504 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1454504 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1454504' 00:09:34.047 killing process with pid 1454504 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1454504 00:09:34.047 16:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1454504 00:09:34.610 16:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.867 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.867 rmmod nvme_tcp 00:09:34.867 rmmod 
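The `killprocess 1454504` trace above shows the shutdown pattern: confirm the PID is alive with `kill -0`, read its command name via `ps --no-headers -o comm=`, refuse to kill a `sudo` wrapper, then kill and reap it. A self-contained sketch of that sequence (function name is an assumption; this is not SPDK's exact helper):

```shell
# Kill a test process safely: verify it exists, never signal a sudo
# wrapper, then terminate and wait so the PID is fully reaped.
killprocess_sketch() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
    return 0
}

sleep 30 &
killprocess_sketch $!
```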
nvme_fabrics 00:09:34.868 rmmod nvme_keyring 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1452873 ']' 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1452873 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1452873 ']' 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1452873 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1452873 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1452873' 00:09:34.868 killing process with pid 1452873 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1452873 00:09:34.868 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1452873 00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.127 16:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.677 16:26:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:37.677 00:09:37.677 real 0m22.188s 00:09:37.677 user 0m29.301s 00:09:37.677 sys 0m4.184s 00:09:37.677 16:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.677 16:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:37.677 ************************************ 00:09:37.677 END TEST nvmf_ns_masking 00:09:37.677 ************************************ 00:09:37.677 16:26:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:37.677 16:26:16 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:37.677 16:26:16 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:37.677 16:26:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:37.677 16:26:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.677 16:26:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.677 ************************************ 00:09:37.677 START TEST nvmf_nvme_cli 00:09:37.677 ************************************ 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:37.677 * Looking for test storage... 
00:09:37.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.677 16:26:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.677 16:26:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.678 16:26:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:37.678 16:26:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.578 16:26:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.578 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.579 16:26:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.579 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.579 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.579 16:26:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.579 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
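The device-discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` for each supported NIC and then strips the directory prefix with `"${pci_net_devs[@]##*/}"` to get bare interface names like `cvl_0_0`. The prefix-stripping expansion can be shown on its own with a fixed sample path (the path is illustrative, not read from sysfs):

```shell
# The ##*/ expansion removes everything up to the last slash, turning a
# sysfs net-device path into the interface name the log reports.
dev_path="/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0"
dev_name="${dev_path##*/}"
echo "Found net device: $dev_name"
# Found net device: cvl_0_0
```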
00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:09:39.579 00:09:39.579 --- 10.0.0.2 ping statistics --- 00:09:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.579 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:09:39.579 00:09:39.579 --- 10.0.0.1 ping statistics --- 00:09:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.579 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1457007 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.579 16:26:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1457007 00:09:39.580 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1457007 ']' 
00:09:39.580 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.580 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.580 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.580 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.580 16:26:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:39.580 [2024-07-15 16:26:19.005856] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:39.580 [2024-07-15 16:26:19.005942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.580 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.580 [2024-07-15 16:26:19.074806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.837 [2024-07-15 16:26:19.200020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.837 [2024-07-15 16:26:19.200074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.837 [2024-07-15 16:26:19.200090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.837 [2024-07-15 16:26:19.200103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.837 [2024-07-15 16:26:19.200114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
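For context, the harness (nvmf/common.sh) builds a point-to-point test bed by moving one side of an interface pair into a network namespace and then launching the SPDK target inside that namespace. A minimal sketch of the equivalent commands, reconstructed from the trace above (interface names cvl_0_0/cvl_0_1, addresses, and the nvmf_tgt flags are taken verbatim from the log; running this requires root and a built SPDK tree):

```shell
# Create a namespace and move one interface into it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends: 10.0.0.1 stays in the root namespace,
# 10.0.0.2 lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links up and open TCP port 4420 for NVMe/TCP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace (core mask 0xF, all trace groups)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
```

The pings in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) verify the veth path in both directions before the target starts.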
00:09:39.837 [2024-07-15 16:26:19.200208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.837 [2024-07-15 16:26:19.200265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.837 [2024-07-15 16:26:19.200318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.837 [2024-07-15 16:26:19.200321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 [2024-07-15 16:26:20.037162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 Malloc0 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 
16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 Malloc1 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
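The rpc_cmd sequence above can be condensed into the following sketch of the equivalent scripts/rpc.py calls (method names and arguments exactly as they appear in the trace; a running nvmf_tgt listening on the default /var/tmp/spdk.sock is assumed):

```shell
# Create the TCP transport (flags as recorded in the trace)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

# Two 64 MiB malloc bdevs with 512-byte blocks as backing namespaces
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1

# Subsystem with serial/model strings, both namespaces attached
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1

# Expose the subsystem (and the discovery service) on 10.0.0.2:4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```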
00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 [2024-07-15 16:26:20.122533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.768 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:40.768 00:09:40.769 Discovery Log Number of Records 2, Generation counter 2 00:09:40.769 =====Discovery Log Entry 0====== 00:09:40.769 trtype: tcp 00:09:40.769 adrfam: ipv4 00:09:40.769 subtype: current discovery subsystem 00:09:40.769 treq: not required 00:09:40.769 portid: 0 00:09:40.769 trsvcid: 4420 00:09:40.769 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:40.769 traddr: 10.0.0.2 00:09:40.769 eflags: explicit discovery connections, duplicate discovery information 00:09:40.769 sectype: none 00:09:40.769 =====Discovery Log Entry 1====== 00:09:40.769 trtype: tcp 00:09:40.769 adrfam: ipv4 00:09:40.769 subtype: nvme subsystem 00:09:40.769 treq: not required 00:09:40.769 portid: 0 00:09:40.769 trsvcid: 4420 00:09:40.769 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:40.769 traddr: 10.0.0.2 00:09:40.769 eflags: none 00:09:40.769 sectype: none 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:40.769 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.700 16:26:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:41.700 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:41.700 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.700 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:41.700 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:41.700 16:26:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
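On the initiator side, the steps recorded above amount to a discover/connect pair with nvme-cli, followed by polling lsblk for the subsystem serial number until both namespaces appear. A sketch under the same assumptions (the hostnqn/hostid values are the ones generated for this run; requires root and the nvme-tcp kernel module):

```shell
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

# Query the discovery service: the trace shows two records
# (the discovery subsystem itself plus cnode1)
nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420

# Connect; the two malloc namespaces surface as /dev/nvme0n1 and /dev/nvme0n2
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# Wait until both namespaces report the subsystem serial, then detach
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the trace expects 2
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
```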
00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:43.598 /dev/nvme0n1 ]] 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:43.598 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:43.599 16:26:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.599 rmmod nvme_tcp 00:09:43.599 rmmod nvme_fabrics 00:09:43.599 rmmod nvme_keyring 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1457007 ']' 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1457007 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 1457007 ']' 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1457007 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.599 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1457007 00:09:43.856 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.856 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.856 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1457007' 00:09:43.856 killing process with pid 1457007 00:09:43.856 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1457007 00:09:43.856 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1457007 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.115 16:26:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.016 16:26:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.016 00:09:46.016 real 0m8.814s 00:09:46.016 user 
0m17.732s 00:09:46.016 sys 0m2.233s 00:09:46.016 16:26:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.016 16:26:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:46.016 ************************************ 00:09:46.016 END TEST nvmf_nvme_cli 00:09:46.016 ************************************ 00:09:46.016 16:26:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:46.016 16:26:25 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:46.016 16:26:25 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:46.016 16:26:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:46.016 16:26:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.016 16:26:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.016 ************************************ 00:09:46.016 START TEST nvmf_vfio_user 00:09:46.016 ************************************ 00:09:46.016 16:26:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:46.274 * Looking for test storage... 
00:09:46.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.274 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.275 
16:26:25 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1457933 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1457933' 00:09:46.275 Process pid: 1457933 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1457933 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1457933 ']' 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.275 16:26:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:46.275 [2024-07-15 16:26:25.729686] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:46.275 [2024-07-15 16:26:25.729776] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.275 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.275 [2024-07-15 16:26:25.792627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.533 [2024-07-15 16:26:25.913406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.533 [2024-07-15 16:26:25.913458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.533 [2024-07-15 16:26:25.913488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.533 [2024-07-15 16:26:25.913500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.533 [2024-07-15 16:26:25.913510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
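The nvmf_vfio_user test that starts here swaps the TCP listener for a local socket directory: the target is launched with an explicit core list rather than a mask, a VFIOUSER transport is created, and each subsystem listens on a per-controller directory under /var/run/vfio-user. A sketch of the calls as they appear in this trace (root and a built SPDK tree assumed):

```shell
# Target with an explicit core list instead of a core mask
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &

scripts/rpc.py nvmf_create_transport -t VFIOUSER

# One controller directory, one malloc namespace per subsystem
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
```

The trace repeats the same pattern for a second device (vfio-user2/2, Malloc2, cnode2, serial SPDK2).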
00:09:46.533 [2024-07-15 16:26:25.913647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.533 [2024-07-15 16:26:25.913694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.533 [2024-07-15 16:26:25.913673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.533 [2024-07-15 16:26:25.913691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.533 16:26:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.533 16:26:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:46.533 16:26:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:47.465 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:47.722 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:47.722 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:47.722 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:47.722 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:47.722 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:47.981 Malloc1 00:09:47.981 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:48.238 16:26:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:48.496 16:26:28 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:48.754 16:26:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:48.754 16:26:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:48.754 16:26:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:49.012 Malloc2 00:09:49.012 16:26:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:49.269 16:26:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:49.526 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:49.783 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:49.783 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:49.783 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:49.783 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:49.783 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:49.783 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:49.783 [2024-07-15 16:26:29.343128] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:49.783 [2024-07-15 16:26:29.343185] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458473 ] 00:09:49.783 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.783 [2024-07-15 16:26:29.378396] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:50.041 [2024-07-15 16:26:29.387433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:50.041 [2024-07-15 16:26:29.387462] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efd9f6ae000 00:09:50.041 [2024-07-15 16:26:29.388428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:50.041 [2024-07-15 16:26:29.389424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:50.041 [2024-07-15 16:26:29.390431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:50.041 [2024-07-15 16:26:29.391433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:50.041 [2024-07-15 16:26:29.392438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:50.042 [2024-07-15 16:26:29.393460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:50.042 [2024-07-15 16:26:29.394455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:50.042 [2024-07-15 16:26:29.395458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:50.042 [2024-07-15 16:26:29.396464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:50.042 [2024-07-15 16:26:29.396483] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efd9f6a3000 00:09:50.042 [2024-07-15 16:26:29.397623] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:50.042 [2024-07-15 16:26:29.411487] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:50.042 [2024-07-15 16:26:29.411522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:50.042 [2024-07-15 16:26:29.420599] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:50.042 [2024-07-15 16:26:29.420650] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:50.042 [2024-07-15 16:26:29.420755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:50.042 [2024-07-15 16:26:29.420784] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:50.042 [2024-07-15 16:26:29.420794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:50.042 [2024-07-15 16:26:29.421593] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:50.042 [2024-07-15 16:26:29.421613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:50.042 [2024-07-15 16:26:29.421625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:50.042 [2024-07-15 16:26:29.422602] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:50.042 [2024-07-15 16:26:29.422620] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:50.042 [2024-07-15 16:26:29.422633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:50.042 [2024-07-15 16:26:29.423601] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:50.042 [2024-07-15 16:26:29.423619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:50.042 [2024-07-15 16:26:29.424611] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:50.042 [2024-07-15 16:26:29.424630] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:50.042 [2024-07-15 16:26:29.424639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:50.042 [2024-07-15 16:26:29.424650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:50.042 [2024-07-15 16:26:29.424759] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:50.042 [2024-07-15 16:26:29.424767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:50.042 [2024-07-15 16:26:29.424779] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:50.042 [2024-07-15 16:26:29.425616] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:50.042 [2024-07-15 16:26:29.426618] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:50.042 [2024-07-15 16:26:29.427629] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:50.042 [2024-07-15 16:26:29.428627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:50.042 [2024-07-15 16:26:29.428735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:50.042 [2024-07-15 16:26:29.429649] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:50.042 [2024-07-15 16:26:29.429666] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:50.042 [2024-07-15 16:26:29.429675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.429698] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:50.042 [2024-07-15 16:26:29.429711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.429735] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:50.042 [2024-07-15 16:26:29.429744] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:50.042 [2024-07-15 16:26:29.429762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:50.042 [2024-07-15 16:26:29.429817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:50.042 [2024-07-15 16:26:29.429832] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:50.042 [2024-07-15 16:26:29.429844] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:50.042 [2024-07-15 16:26:29.429851] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:09:50.042 [2024-07-15 16:26:29.429874] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:50.042 [2024-07-15 16:26:29.429889] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:50.042 [2024-07-15 16:26:29.429897] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:50.042 [2024-07-15 16:26:29.429905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.429918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.429934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:50.042 [2024-07-15 16:26:29.429954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:50.042 [2024-07-15 16:26:29.429975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:50.042 [2024-07-15 16:26:29.429995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:50.042 [2024-07-15 16:26:29.430008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:50.042 [2024-07-15 16:26:29.430020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:50.042 [2024-07-15 16:26:29.430029] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.430044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.430059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:50.042 [2024-07-15 16:26:29.430074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:50.042 [2024-07-15 16:26:29.430084] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:50.042 [2024-07-15 16:26:29.430093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.430103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.430113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.430126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:50.042 [2024-07-15 16:26:29.430138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:50.042 [2024-07-15 16:26:29.430219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:09:50.042 [2024-07-15 16:26:29.430248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:50.042 [2024-07-15 16:26:29.430262] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:50.042 [2024-07-15 16:26:29.430269] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:50.042 [2024-07-15 16:26:29.430278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:50.042 [2024-07-15 16:26:29.430291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430307] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:50.043 [2024-07-15 16:26:29.430326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430352] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:50.043 [2024-07-15 16:26:29.430359] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:50.043 [2024-07-15 16:26:29.430368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:50.043 
[2024-07-15 16:26:29.430413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430439] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:50.043 [2024-07-15 16:26:29.430447] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:50.043 [2024-07-15 16:26:29.430456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430539] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:50.043 [2024-07-15 16:26:29.430546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:50.043 [2024-07-15 16:26:29.430554] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:50.043 [2024-07-15 16:26:29.430581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430682] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430703] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:50.043 [2024-07-15 16:26:29.430716] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:50.043 [2024-07-15 16:26:29.430722] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:50.043 [2024-07-15 16:26:29.430728] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:50.043 [2024-07-15 16:26:29.430737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:50.043 [2024-07-15 16:26:29.430748] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:50.043 [2024-07-15 16:26:29.430756] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:50.043 [2024-07-15 16:26:29.430764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430775] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:50.043 [2024-07-15 16:26:29.430782] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:50.043 [2024-07-15 16:26:29.430790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430802] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:50.043 [2024-07-15 16:26:29.430809] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:50.043 [2024-07-15 16:26:29.430817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:50.043 [2024-07-15 16:26:29.430828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:50.043 [2024-07-15 16:26:29.430904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:50.043 ===================================================== 00:09:50.043 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:50.043 ===================================================== 00:09:50.043 Controller Capabilities/Features 00:09:50.043 ================================ 00:09:50.043 Vendor ID: 4e58 00:09:50.043 Subsystem Vendor ID: 4e58 00:09:50.043 Serial Number: SPDK1 00:09:50.043 Model Number: SPDK bdev Controller 00:09:50.043 Firmware Version: 24.09 00:09:50.043 Recommended Arb Burst: 6 00:09:50.043 IEEE OUI Identifier: 8d 6b 50 00:09:50.043 Multi-path I/O 00:09:50.043 May have multiple subsystem ports: Yes 00:09:50.043 May have multiple controllers: Yes 00:09:50.043 Associated with SR-IOV VF: No 00:09:50.043 Max Data Transfer Size: 131072 00:09:50.043 Max Number of Namespaces: 32 00:09:50.043 Max Number of I/O Queues: 127 00:09:50.043 NVMe Specification Version (VS): 1.3 00:09:50.043 NVMe Specification Version (Identify): 1.3 00:09:50.043 Maximum Queue Entries: 256 00:09:50.043 
Contiguous Queues Required: Yes 00:09:50.043 Arbitration Mechanisms Supported 00:09:50.043 Weighted Round Robin: Not Supported 00:09:50.043 Vendor Specific: Not Supported 00:09:50.043 Reset Timeout: 15000 ms 00:09:50.043 Doorbell Stride: 4 bytes 00:09:50.043 NVM Subsystem Reset: Not Supported 00:09:50.043 Command Sets Supported 00:09:50.043 NVM Command Set: Supported 00:09:50.043 Boot Partition: Not Supported 00:09:50.043 Memory Page Size Minimum: 4096 bytes 00:09:50.043 Memory Page Size Maximum: 4096 bytes 00:09:50.043 Persistent Memory Region: Not Supported 00:09:50.043 Optional Asynchronous Events Supported 00:09:50.043 Namespace Attribute Notices: Supported 00:09:50.043 Firmware Activation Notices: Not Supported 00:09:50.043 ANA Change Notices: Not Supported 00:09:50.043 PLE Aggregate Log Change Notices: Not Supported 00:09:50.043 LBA Status Info Alert Notices: Not Supported 00:09:50.043 EGE Aggregate Log Change Notices: Not Supported 00:09:50.043 Normal NVM Subsystem Shutdown event: Not Supported 00:09:50.043 Zone Descriptor Change Notices: Not Supported 00:09:50.043 Discovery Log Change Notices: Not Supported 00:09:50.043 Controller Attributes 00:09:50.043 128-bit Host Identifier: Supported 00:09:50.043 Non-Operational Permissive Mode: Not Supported 00:09:50.043 NVM Sets: Not Supported 00:09:50.043 Read Recovery Levels: Not Supported 00:09:50.043 Endurance Groups: Not Supported 00:09:50.043 Predictable Latency Mode: Not Supported 00:09:50.043 Traffic Based Keep ALive: Not Supported 00:09:50.043 Namespace Granularity: Not Supported 00:09:50.043 SQ Associations: Not Supported 00:09:50.044 UUID List: Not Supported 00:09:50.044 Multi-Domain Subsystem: Not Supported 00:09:50.044 Fixed Capacity Management: Not Supported 00:09:50.044 Variable Capacity Management: Not Supported 00:09:50.044 Delete Endurance Group: Not Supported 00:09:50.044 Delete NVM Set: Not Supported 00:09:50.044 Extended LBA Formats Supported: Not Supported 00:09:50.044 Flexible Data Placement 
Supported: Not Supported 00:09:50.044 00:09:50.044 Controller Memory Buffer Support 00:09:50.044 ================================ 00:09:50.044 Supported: No 00:09:50.044 00:09:50.044 Persistent Memory Region Support 00:09:50.044 ================================ 00:09:50.044 Supported: No 00:09:50.044 00:09:50.044 Admin Command Set Attributes 00:09:50.044 ============================ 00:09:50.044 Security Send/Receive: Not Supported 00:09:50.044 Format NVM: Not Supported 00:09:50.044 Firmware Activate/Download: Not Supported 00:09:50.044 Namespace Management: Not Supported 00:09:50.044 Device Self-Test: Not Supported 00:09:50.044 Directives: Not Supported 00:09:50.044 NVMe-MI: Not Supported 00:09:50.044 Virtualization Management: Not Supported 00:09:50.044 Doorbell Buffer Config: Not Supported 00:09:50.044 Get LBA Status Capability: Not Supported 00:09:50.044 Command & Feature Lockdown Capability: Not Supported 00:09:50.044 Abort Command Limit: 4 00:09:50.044 Async Event Request Limit: 4 00:09:50.044 Number of Firmware Slots: N/A 00:09:50.044 Firmware Slot 1 Read-Only: N/A 00:09:50.044 Firmware Activation Without Reset: N/A 00:09:50.044 Multiple Update Detection Support: N/A 00:09:50.044 Firmware Update Granularity: No Information Provided 00:09:50.044 Per-Namespace SMART Log: No 00:09:50.044 Asymmetric Namespace Access Log Page: Not Supported 00:09:50.044 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:50.044 Command Effects Log Page: Supported 00:09:50.044 Get Log Page Extended Data: Supported 00:09:50.044 Telemetry Log Pages: Not Supported 00:09:50.044 Persistent Event Log Pages: Not Supported 00:09:50.044 Supported Log Pages Log Page: May Support 00:09:50.044 Commands Supported & Effects Log Page: Not Supported 00:09:50.044 Feature Identifiers & Effects Log Page:May Support 00:09:50.044 NVMe-MI Commands & Effects Log Page: May Support 00:09:50.044 Data Area 4 for Telemetry Log: Not Supported 00:09:50.044 Error Log Page Entries Supported: 128 00:09:50.044 Keep 
Alive: Supported 00:09:50.044 Keep Alive Granularity: 10000 ms 00:09:50.044 00:09:50.044 NVM Command Set Attributes 00:09:50.044 ========================== 00:09:50.044 Submission Queue Entry Size 00:09:50.044 Max: 64 00:09:50.044 Min: 64 00:09:50.044 Completion Queue Entry Size 00:09:50.044 Max: 16 00:09:50.044 Min: 16 00:09:50.044 Number of Namespaces: 32 00:09:50.044 Compare Command: Supported 00:09:50.044 Write Uncorrectable Command: Not Supported 00:09:50.044 Dataset Management Command: Supported 00:09:50.044 Write Zeroes Command: Supported 00:09:50.044 Set Features Save Field: Not Supported 00:09:50.044 Reservations: Not Supported 00:09:50.044 Timestamp: Not Supported 00:09:50.044 Copy: Supported 00:09:50.044 Volatile Write Cache: Present 00:09:50.044 Atomic Write Unit (Normal): 1 00:09:50.044 Atomic Write Unit (PFail): 1 00:09:50.044 Atomic Compare & Write Unit: 1 00:09:50.044 Fused Compare & Write: Supported 00:09:50.044 Scatter-Gather List 00:09:50.044 SGL Command Set: Supported (Dword aligned) 00:09:50.044 SGL Keyed: Not Supported 00:09:50.044 SGL Bit Bucket Descriptor: Not Supported 00:09:50.044 SGL Metadata Pointer: Not Supported 00:09:50.044 Oversized SGL: Not Supported 00:09:50.044 SGL Metadata Address: Not Supported 00:09:50.044 SGL Offset: Not Supported 00:09:50.044 Transport SGL Data Block: Not Supported 00:09:50.044 Replay Protected Memory Block: Not Supported 00:09:50.044 00:09:50.044 Firmware Slot Information 00:09:50.044 ========================= 00:09:50.044 Active slot: 1 00:09:50.044 Slot 1 Firmware Revision: 24.09 00:09:50.044 00:09:50.044 00:09:50.044 Commands Supported and Effects 00:09:50.044 ============================== 00:09:50.044 Admin Commands 00:09:50.044 -------------- 00:09:50.044 Get Log Page (02h): Supported 00:09:50.044 Identify (06h): Supported 00:09:50.044 Abort (08h): Supported 00:09:50.044 Set Features (09h): Supported 00:09:50.044 Get Features (0Ah): Supported 00:09:50.044 Asynchronous Event Request (0Ch): Supported 
00:09:50.044 Keep Alive (18h): Supported 00:09:50.044 I/O Commands 00:09:50.044 ------------ 00:09:50.044 Flush (00h): Supported LBA-Change 00:09:50.044 Write (01h): Supported LBA-Change 00:09:50.044 Read (02h): Supported 00:09:50.044 Compare (05h): Supported 00:09:50.044 Write Zeroes (08h): Supported LBA-Change 00:09:50.044 Dataset Management (09h): Supported LBA-Change 00:09:50.044 Copy (19h): Supported LBA-Change 00:09:50.044 00:09:50.044 Error Log 00:09:50.044 ========= 00:09:50.044 00:09:50.044 Arbitration 00:09:50.044 =========== 00:09:50.044 Arbitration Burst: 1 00:09:50.044 00:09:50.044 Power Management 00:09:50.044 ================ 00:09:50.044 Number of Power States: 1 00:09:50.044 Current Power State: Power State #0 00:09:50.044 Power State #0: 00:09:50.044 Max Power: 0.00 W 00:09:50.044 Non-Operational State: Operational 00:09:50.044 Entry Latency: Not Reported 00:09:50.044 Exit Latency: Not Reported 00:09:50.044 Relative Read Throughput: 0 00:09:50.044 Relative Read Latency: 0 00:09:50.044 Relative Write Throughput: 0 00:09:50.044 Relative Write Latency: 0 00:09:50.044 Idle Power: Not Reported 00:09:50.044 Active Power: Not Reported 00:09:50.044 Non-Operational Permissive Mode: Not Supported 00:09:50.044 00:09:50.044 Health Information 00:09:50.044 ================== 00:09:50.044 Critical Warnings: 00:09:50.044 Available Spare Space: OK 00:09:50.044 Temperature: OK 00:09:50.044 Device Reliability: OK 00:09:50.044 Read Only: No 00:09:50.044 Volatile Memory Backup: OK 00:09:50.044 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:50.044 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:50.044 Available Spare: 0% 00:09:50.044 Available Sp[2024-07-15 16:26:29.431037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:50.044 [2024-07-15 16:26:29.431054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:09:50.044 [2024-07-15 16:26:29.431102] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:50.044 [2024-07-15 16:26:29.431121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:50.044 [2024-07-15 16:26:29.431132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:50.044 [2024-07-15 16:26:29.431142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:50.044 [2024-07-15 16:26:29.431153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:50.044 [2024-07-15 16:26:29.431664] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:50.044 [2024-07-15 16:26:29.431683] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:50.044 [2024-07-15 16:26:29.432661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:50.044 [2024-07-15 16:26:29.432751] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:50.045 [2024-07-15 16:26:29.432770] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:50.045 [2024-07-15 16:26:29.433672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:50.045 [2024-07-15 16:26:29.433695] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:09:50.045 [2024-07-15 16:26:29.433748] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:50.045 [2024-07-15 16:26:29.435710] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:50.045 Available Spare Threshold: 0% 00:09:50.045 Life Percentage Used: 0% 00:09:50.045 Data Units Read: 0 00:09:50.045 Data Units Written: 0 00:09:50.045 Host Read Commands: 0 00:09:50.045 Host Write Commands: 0 00:09:50.045 Controller Busy Time: 0 minutes 00:09:50.045 Power Cycles: 0 00:09:50.045 Power On Hours: 0 hours 00:09:50.045 Unsafe Shutdowns: 0 00:09:50.045 Unrecoverable Media Errors: 0 00:09:50.045 Lifetime Error Log Entries: 0 00:09:50.045 Warning Temperature Time: 0 minutes 00:09:50.045 Critical Temperature Time: 0 minutes 00:09:50.045 00:09:50.045 Number of Queues 00:09:50.045 ================ 00:09:50.045 Number of I/O Submission Queues: 127 00:09:50.045 Number of I/O Completion Queues: 127 00:09:50.045 00:09:50.045 Active Namespaces 00:09:50.045 ================= 00:09:50.045 Namespace ID:1 00:09:50.045 Error Recovery Timeout: Unlimited 00:09:50.045 Command Set Identifier: NVM (00h) 00:09:50.045 Deallocate: Supported 00:09:50.045 Deallocated/Unwritten Error: Not Supported 00:09:50.045 Deallocated Read Value: Unknown 00:09:50.045 Deallocate in Write Zeroes: Not Supported 00:09:50.045 Deallocated Guard Field: 0xFFFF 00:09:50.045 Flush: Supported 00:09:50.045 Reservation: Supported 00:09:50.045 Namespace Sharing Capabilities: Multiple Controllers 00:09:50.045 Size (in LBAs): 131072 (0GiB) 00:09:50.045 Capacity (in LBAs): 131072 (0GiB) 00:09:50.045 Utilization (in LBAs): 131072 (0GiB) 00:09:50.045 NGUID: AFAEE8F48E274822AEED212F0295F855 00:09:50.045 UUID: afaee8f4-8e27-4822-aeed-212f0295f855 00:09:50.045 Thin Provisioning: Not Supported 00:09:50.045 Per-NS Atomic Units: Yes 00:09:50.045 Atomic Boundary Size (Normal): 0 
00:09:50.045 Atomic Boundary Size (PFail): 0 00:09:50.045 Atomic Boundary Offset: 0 00:09:50.045 Maximum Single Source Range Length: 65535 00:09:50.045 Maximum Copy Length: 65535 00:09:50.045 Maximum Source Range Count: 1 00:09:50.045 NGUID/EUI64 Never Reused: No 00:09:50.045 Namespace Write Protected: No 00:09:50.045 Number of LBA Formats: 1 00:09:50.045 Current LBA Format: LBA Format #00 00:09:50.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:50.045 00:09:50.045 16:26:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:50.045 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.302 [2024-07-15 16:26:29.664724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:55.592 Initializing NVMe Controllers 00:09:55.592 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:55.592 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:55.592 Initialization complete. Launching workers. 
00:09:55.592 ======================================================== 00:09:55.592 Latency(us) 00:09:55.592 Device Information : IOPS MiB/s Average min max 00:09:55.592 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34555.79 134.98 3705.05 1145.42 11652.71 00:09:55.592 ======================================================== 00:09:55.592 Total : 34555.79 134.98 3705.05 1145.42 11652.71 00:09:55.592 00:09:55.592 [2024-07-15 16:26:34.686533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:55.592 16:26:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:55.592 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.592 [2024-07-15 16:26:34.932654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:00.854 Initializing NVMe Controllers 00:10:00.854 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:00.855 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:00.855 Initialization complete. Launching workers. 
00:10:00.855 ======================================================== 00:10:00.855 Latency(us) 00:10:00.855 Device Information : IOPS MiB/s Average min max 00:10:00.855 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8008.37 7370.85 15980.41 00:10:00.855 ======================================================== 00:10:00.855 Total : 16000.00 62.50 8008.37 7370.85 15980.41 00:10:00.855 00:10:00.855 [2024-07-15 16:26:39.968640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:00.855 16:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:00.855 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.855 [2024-07-15 16:26:40.182708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:06.117 [2024-07-15 16:26:45.251310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:06.117 Initializing NVMe Controllers 00:10:06.117 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:06.117 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:06.117 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:06.117 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:06.117 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:06.117 Initialization complete. Launching workers. 
00:10:06.117 Starting thread on core 2 00:10:06.117 Starting thread on core 3 00:10:06.117 Starting thread on core 1 00:10:06.117 16:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:06.117 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.117 [2024-07-15 16:26:45.564327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:09.402 [2024-07-15 16:26:48.621631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:09.402 Initializing NVMe Controllers 00:10:09.402 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:09.402 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:09.402 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:09.402 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:09.402 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:09.402 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:09.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:09.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:09.402 Initialization complete. Launching workers. 
00:10:09.402 Starting thread on core 1 with urgent priority queue 00:10:09.402 Starting thread on core 2 with urgent priority queue 00:10:09.402 Starting thread on core 3 with urgent priority queue 00:10:09.402 Starting thread on core 0 with urgent priority queue 00:10:09.402 SPDK bdev Controller (SPDK1 ) core 0: 4733.67 IO/s 21.13 secs/100000 ios 00:10:09.402 SPDK bdev Controller (SPDK1 ) core 1: 4862.33 IO/s 20.57 secs/100000 ios 00:10:09.402 SPDK bdev Controller (SPDK1 ) core 2: 4763.67 IO/s 20.99 secs/100000 ios 00:10:09.402 SPDK bdev Controller (SPDK1 ) core 3: 4751.67 IO/s 21.05 secs/100000 ios 00:10:09.402 ======================================================== 00:10:09.402 00:10:09.402 16:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:09.402 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.402 [2024-07-15 16:26:48.920383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:09.402 Initializing NVMe Controllers 00:10:09.402 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:09.402 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:09.402 Namespace ID: 1 size: 0GB 00:10:09.402 Initialization complete. 00:10:09.402 INFO: using host memory buffer for IO 00:10:09.402 Hello world! 
00:10:09.402 [2024-07-15 16:26:48.953971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:09.660 16:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:09.660 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.660 [2024-07-15 16:26:49.256350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:11.034 Initializing NVMe Controllers 00:10:11.034 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:11.034 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:11.034 Initialization complete. Launching workers. 00:10:11.034 submit (in ns) avg, min, max = 8692.0, 3516.7, 4004197.8 00:10:11.034 complete (in ns) avg, min, max = 23800.2, 2067.8, 6995828.9 00:10:11.034 00:10:11.034 Submit histogram 00:10:11.034 ================ 00:10:11.034 Range in us Cumulative Count 00:10:11.034 3.508 - 3.532: 0.0073% ( 1) 00:10:11.034 3.532 - 3.556: 0.0879% ( 11) 00:10:11.034 3.556 - 3.579: 0.8571% ( 105) 00:10:11.034 3.579 - 3.603: 2.7546% ( 259) 00:10:11.034 3.603 - 3.627: 6.7692% ( 548) 00:10:11.034 3.627 - 3.650: 13.2308% ( 882) 00:10:11.034 3.650 - 3.674: 21.3919% ( 1114) 00:10:11.034 3.674 - 3.698: 29.1575% ( 1060) 00:10:11.034 3.698 - 3.721: 37.3773% ( 1122) 00:10:11.034 3.721 - 3.745: 43.9194% ( 893) 00:10:11.034 3.745 - 3.769: 49.5971% ( 775) 00:10:11.034 3.769 - 3.793: 54.5788% ( 680) 00:10:11.034 3.793 - 3.816: 58.8278% ( 580) 00:10:11.034 3.816 - 3.840: 62.3150% ( 476) 00:10:11.034 3.840 - 3.864: 66.3150% ( 546) 00:10:11.034 3.864 - 3.887: 70.4982% ( 571) 00:10:11.034 3.887 - 3.911: 74.8791% ( 598) 00:10:11.034 3.911 - 3.935: 78.8498% ( 542) 00:10:11.034 3.935 - 3.959: 82.0586% ( 438) 00:10:11.034 3.959 - 3.982: 84.6960% ( 360) 
00:10:11.034 3.982 - 4.006: 86.8059% ( 288) 00:10:11.034 4.006 - 4.030: 88.4762% ( 228) 00:10:11.034 4.030 - 4.053: 89.9487% ( 201) 00:10:11.034 4.053 - 4.077: 91.1282% ( 161) 00:10:11.034 4.077 - 4.101: 92.1319% ( 137) 00:10:11.034 4.101 - 4.124: 93.0769% ( 129) 00:10:11.034 4.124 - 4.148: 94.0586% ( 134) 00:10:11.034 4.148 - 4.172: 94.7692% ( 97) 00:10:11.034 4.172 - 4.196: 95.2747% ( 69) 00:10:11.034 4.196 - 4.219: 95.6557% ( 52) 00:10:11.034 4.219 - 4.243: 96.0000% ( 47) 00:10:11.034 4.243 - 4.267: 96.2198% ( 30) 00:10:11.034 4.267 - 4.290: 96.3883% ( 23) 00:10:11.034 4.290 - 4.314: 96.5055% ( 16) 00:10:11.034 4.314 - 4.338: 96.6154% ( 15) 00:10:11.034 4.338 - 4.361: 96.7399% ( 17) 00:10:11.034 4.361 - 4.385: 96.8718% ( 18) 00:10:11.034 4.385 - 4.409: 96.9524% ( 11) 00:10:11.034 4.409 - 4.433: 97.0696% ( 16) 00:10:11.034 4.433 - 4.456: 97.1209% ( 7) 00:10:11.034 4.456 - 4.480: 97.1648% ( 6) 00:10:11.034 4.480 - 4.504: 97.1868% ( 3) 00:10:11.034 4.504 - 4.527: 97.2015% ( 2) 00:10:11.034 4.527 - 4.551: 97.2088% ( 1) 00:10:11.034 4.551 - 4.575: 97.2234% ( 2) 00:10:11.034 4.599 - 4.622: 97.2308% ( 1) 00:10:11.034 4.622 - 4.646: 97.2527% ( 3) 00:10:11.034 4.693 - 4.717: 97.2674% ( 2) 00:10:11.034 4.741 - 4.764: 97.2821% ( 2) 00:10:11.034 4.764 - 4.788: 97.3040% ( 3) 00:10:11.034 4.788 - 4.812: 97.3553% ( 7) 00:10:11.034 4.812 - 4.836: 97.3700% ( 2) 00:10:11.034 4.836 - 4.859: 97.3993% ( 4) 00:10:11.034 4.859 - 4.883: 97.4212% ( 3) 00:10:11.034 4.883 - 4.907: 97.4945% ( 10) 00:10:11.034 4.907 - 4.930: 97.5385% ( 6) 00:10:11.034 4.930 - 4.954: 97.5751% ( 5) 00:10:11.034 4.954 - 4.978: 97.6044% ( 4) 00:10:11.034 4.978 - 5.001: 97.6337% ( 4) 00:10:11.034 5.001 - 5.025: 97.7143% ( 11) 00:10:11.034 5.025 - 5.049: 97.7436% ( 4) 00:10:11.034 5.049 - 5.073: 97.7729% ( 4) 00:10:11.034 5.073 - 5.096: 97.8168% ( 6) 00:10:11.034 5.096 - 5.120: 97.8242% ( 1) 00:10:11.034 5.120 - 5.144: 97.8535% ( 4) 00:10:11.034 5.144 - 5.167: 97.8901% ( 5) 00:10:11.034 5.167 - 5.191: 97.9048% ( 
2) 00:10:11.034 5.191 - 5.215: 97.9560% ( 7) 00:10:11.034 5.215 - 5.239: 97.9853% ( 4) 00:10:11.034 5.239 - 5.262: 97.9927% ( 1) 00:10:11.034 5.286 - 5.310: 98.0293% ( 5) 00:10:11.034 5.310 - 5.333: 98.0366% ( 1) 00:10:11.034 5.333 - 5.357: 98.0440% ( 1) 00:10:11.034 5.357 - 5.381: 98.0513% ( 1) 00:10:11.034 5.452 - 5.476: 98.0586% ( 1) 00:10:11.034 5.499 - 5.523: 98.0879% ( 4) 00:10:11.034 5.547 - 5.570: 98.0952% ( 1) 00:10:11.034 5.570 - 5.594: 98.1026% ( 1) 00:10:11.034 5.618 - 5.641: 98.1099% ( 1) 00:10:11.034 5.831 - 5.855: 98.1172% ( 1) 00:10:11.034 5.950 - 5.973: 98.1245% ( 1) 00:10:11.034 5.973 - 5.997: 98.1319% ( 1) 00:10:11.034 6.210 - 6.258: 98.1392% ( 1) 00:10:11.034 6.400 - 6.447: 98.1465% ( 1) 00:10:11.034 6.637 - 6.684: 98.1538% ( 1) 00:10:11.034 6.779 - 6.827: 98.1612% ( 1) 00:10:11.034 6.969 - 7.016: 98.1685% ( 1) 00:10:11.034 7.016 - 7.064: 98.1758% ( 1) 00:10:11.034 7.064 - 7.111: 98.1905% ( 2) 00:10:11.034 7.206 - 7.253: 98.1978% ( 1) 00:10:11.034 7.348 - 7.396: 98.2051% ( 1) 00:10:11.034 7.443 - 7.490: 98.2125% ( 1) 00:10:11.034 7.490 - 7.538: 98.2271% ( 2) 00:10:11.034 7.585 - 7.633: 98.2418% ( 2) 00:10:11.034 7.633 - 7.680: 98.2564% ( 2) 00:10:11.034 7.727 - 7.775: 98.2637% ( 1) 00:10:11.034 7.775 - 7.822: 98.2784% ( 2) 00:10:11.034 7.822 - 7.870: 98.2857% ( 1) 00:10:11.034 7.870 - 7.917: 98.2930% ( 1) 00:10:11.034 7.917 - 7.964: 98.3077% ( 2) 00:10:11.034 7.964 - 8.012: 98.3150% ( 1) 00:10:11.034 8.201 - 8.249: 98.3223% ( 1) 00:10:11.034 8.249 - 8.296: 98.3297% ( 1) 00:10:11.034 8.296 - 8.344: 98.3443% ( 2) 00:10:11.034 8.344 - 8.391: 98.3516% ( 1) 00:10:11.034 8.486 - 8.533: 98.3590% ( 1) 00:10:11.034 8.581 - 8.628: 98.3810% ( 3) 00:10:11.034 8.676 - 8.723: 98.3956% ( 2) 00:10:11.034 8.723 - 8.770: 98.4029% ( 1) 00:10:11.034 8.770 - 8.818: 98.4103% ( 1) 00:10:11.034 8.865 - 8.913: 98.4249% ( 2) 00:10:11.034 8.913 - 8.960: 98.4469% ( 3) 00:10:11.034 8.960 - 9.007: 98.4542% ( 1) 00:10:11.034 9.055 - 9.102: 98.4615% ( 1) 00:10:11.034 9.102 - 
9.150: 98.4689% ( 1) 00:10:11.034 9.150 - 9.197: 98.4835% ( 2) 00:10:11.034 9.244 - 9.292: 98.4908% ( 1) 00:10:11.034 9.339 - 9.387: 98.5055% ( 2) 00:10:11.034 9.387 - 9.434: 98.5128% ( 1) 00:10:11.034 9.434 - 9.481: 98.5201% ( 1) 00:10:11.034 9.529 - 9.576: 98.5348% ( 2) 00:10:11.034 9.576 - 9.624: 98.5495% ( 2) 00:10:11.034 9.624 - 9.671: 98.5641% ( 2) 00:10:11.034 9.671 - 9.719: 98.5714% ( 1) 00:10:11.034 9.861 - 9.908: 98.5788% ( 1) 00:10:11.034 10.003 - 10.050: 98.5934% ( 2) 00:10:11.034 10.050 - 10.098: 98.6007% ( 1) 00:10:11.034 10.193 - 10.240: 98.6081% ( 1) 00:10:11.034 10.240 - 10.287: 98.6154% ( 1) 00:10:11.034 10.335 - 10.382: 98.6227% ( 1) 00:10:11.034 10.524 - 10.572: 98.6300% ( 1) 00:10:11.034 10.667 - 10.714: 98.6374% ( 1) 00:10:11.034 10.714 - 10.761: 98.6447% ( 1) 00:10:11.034 10.809 - 10.856: 98.6667% ( 3) 00:10:11.034 10.856 - 10.904: 98.6813% ( 2) 00:10:11.034 10.904 - 10.951: 98.6886% ( 1) 00:10:11.034 11.046 - 11.093: 98.6960% ( 1) 00:10:11.034 11.141 - 11.188: 98.7033% ( 1) 00:10:11.034 11.188 - 11.236: 98.7106% ( 1) 00:10:11.034 11.330 - 11.378: 98.7253% ( 2) 00:10:11.034 11.425 - 11.473: 98.7326% ( 1) 00:10:11.034 11.567 - 11.615: 98.7399% ( 1) 00:10:11.034 11.662 - 11.710: 98.7473% ( 1) 00:10:11.034 11.710 - 11.757: 98.7546% ( 1) 00:10:11.034 11.804 - 11.852: 98.7619% ( 1) 00:10:11.034 11.994 - 12.041: 98.7692% ( 1) 00:10:11.034 12.041 - 12.089: 98.7766% ( 1) 00:10:11.034 12.136 - 12.231: 98.7839% ( 1) 00:10:11.034 12.231 - 12.326: 98.7912% ( 1) 00:10:11.034 12.326 - 12.421: 98.7985% ( 1) 00:10:11.034 12.421 - 12.516: 98.8059% ( 1) 00:10:11.034 13.084 - 13.179: 98.8132% ( 1) 00:10:11.034 13.274 - 13.369: 98.8205% ( 1) 00:10:11.034 13.653 - 13.748: 98.8352% ( 2) 00:10:11.034 13.843 - 13.938: 98.8425% ( 1) 00:10:11.034 13.938 - 14.033: 98.8718% ( 4) 00:10:11.034 14.412 - 14.507: 98.8791% ( 1) 00:10:11.034 14.507 - 14.601: 98.9011% ( 3) 00:10:11.034 14.601 - 14.696: 98.9084% ( 1) 00:10:11.034 15.170 - 15.265: 98.9158% ( 1) 00:10:11.034 
15.265 - 15.360: 98.9304% ( 2) 00:10:11.034 17.067 - 17.161: 98.9451% ( 2) 00:10:11.034 17.256 - 17.351: 98.9670% ( 3) 00:10:11.034 17.351 - 17.446: 99.0037% ( 5) 00:10:11.034 17.446 - 17.541: 99.0110% ( 1) 00:10:11.034 17.541 - 17.636: 99.0403% ( 4) 00:10:11.034 17.636 - 17.730: 99.0769% ( 5) 00:10:11.034 17.730 - 17.825: 99.1355% ( 8) 00:10:11.034 17.825 - 17.920: 99.1795% ( 6) 00:10:11.034 17.920 - 18.015: 99.2234% ( 6) 00:10:11.034 18.015 - 18.110: 99.2527% ( 4) 00:10:11.034 18.110 - 18.204: 99.3114% ( 8) 00:10:11.034 18.204 - 18.299: 99.4139% ( 14) 00:10:11.034 18.299 - 18.394: 99.4945% ( 11) 00:10:11.034 18.394 - 18.489: 99.5897% ( 13) 00:10:11.034 18.489 - 18.584: 99.6703% ( 11) 00:10:11.034 18.584 - 18.679: 99.7216% ( 7) 00:10:11.035 18.679 - 18.773: 99.7509% ( 4) 00:10:11.035 18.773 - 18.868: 99.7582% ( 1) 00:10:11.035 18.868 - 18.963: 99.7802% ( 3) 00:10:11.035 18.963 - 19.058: 99.7949% ( 2) 00:10:11.035 19.058 - 19.153: 99.8095% ( 2) 00:10:11.035 19.153 - 19.247: 99.8168% ( 1) 00:10:11.035 19.247 - 19.342: 99.8242% ( 1) 00:10:11.035 19.437 - 19.532: 99.8315% ( 1) 00:10:11.035 19.721 - 19.816: 99.8388% ( 1) 00:10:11.035 21.428 - 21.523: 99.8462% ( 1) 00:10:11.035 24.178 - 24.273: 99.8535% ( 1) 00:10:11.035 25.031 - 25.221: 99.8608% ( 1) 00:10:11.035 27.686 - 27.876: 99.8681% ( 1) 00:10:11.035 27.876 - 28.065: 99.8755% ( 1) 00:10:11.035 29.393 - 29.582: 99.8828% ( 1) 00:10:11.035 3980.705 - 4004.978: 100.0000% ( 16) 00:10:11.035 00:10:11.035 Complete histogram 00:10:11.035 ================== 00:10:11.035 Range in us Cumulative Count 00:10:11.035 2.062 - 2.074: 0.4249% ( 58) 00:10:11.035 2.074 - 2.086: 21.4066% ( 2864) 00:10:11.035 2.086 - 2.098: 42.4103% ( 2867) 00:10:11.035 2.098 - 2.110: 45.7582% ( 457) 00:10:11.035 2.110 - 2.121: 52.4689% ( 916) 00:10:11.035 2.121 - 2.133: 55.4945% ( 413) 00:10:11.035 2.133 - 2.145: 58.0513% ( 349) 00:10:11.035 2.145 - 2.157: 68.7473% ( 1460) 00:10:11.035 2.157 - 2.169: 73.4945% ( 648) 00:10:11.035 2.169 - 2.181: 
75.1062% ( 220) 00:10:11.035 2.181 - 2.193: 77.9194% ( 384) 00:10:11.035 2.193 - 2.204: 79.3626% ( 197) 00:10:11.035 2.204 - 2.216: 80.4103% ( 143) 00:10:11.035 2.216 - 2.228: 85.1795% ( 651) 00:10:11.035 2.228 - 2.240: 88.6593% ( 475) 00:10:11.035 2.240 - 2.252: 90.5275% ( 255) 00:10:11.035 2.252 - 2.264: 92.6374% ( 288) 00:10:11.035 2.264 - 2.276: 93.3993% ( 104) 00:10:11.035 2.276 - 2.287: 93.8388% ( 60) 00:10:11.035 2.287 - 2.299: 94.3297% ( 67) 00:10:11.035 2.299 - 2.311: 94.8571% ( 72) 00:10:11.035 2.311 - 2.323: 95.6190% ( 104) 00:10:11.035 2.323 - 2.335: 95.7802% ( 22) 00:10:11.035 2.335 - 2.347: 95.8462% ( 9) 00:10:11.035 2.347 - 2.359: 95.9634% ( 16) 00:10:11.035 2.359 - 2.370: 96.0733% ( 15) 00:10:11.035 2.370 - 2.382: 96.2711% ( 27) 00:10:11.035 2.382 - 2.394: 96.5495% ( 38) 00:10:11.035 2.394 - 2.406: 96.8352% ( 39) 00:10:11.035 2.406 - 2.418: 97.0037% ( 23) 00:10:11.035 2.418 - 2.430: 97.1575% ( 21) 00:10:11.035 2.430 - 2.441: 97.3260% ( 23) 00:10:11.035 2.441 - 2.453: 97.5165% ( 26) 00:10:11.035 2.453 - 2.465: 97.6777% ( 22) 00:10:11.035 2.465 - 2.477: 97.8168% ( 19) 00:10:11.035 2.477 - 2.489: 97.9707% ( 21) 00:10:11.035 2.489 - 2.501: 98.0659% ( 13) 00:10:11.035 2.501 - 2.513: 98.1172% ( 7) 00:10:11.035 2.513 - 2.524: 98.1612% ( 6) 00:10:11.035 2.524 - 2.536: 98.2198% ( 8) 00:10:11.035 2.536 - 2.548: 98.2564% ( 5) 00:10:11.035 2.548 - 2.560: 98.2930% ( 5) 00:10:11.035 2.560 - 2.572: 98.3370% ( 6) 00:10:11.035 2.572 - 2.584: 98.3443% ( 1) 00:10:11.035 2.584 - 2.596: 98.3590% ( 2) 00:10:11.035 2.596 - 2.607: 98.3883% ( 4) 00:10:11.035 2.607 - 2.619: 98.3956% ( 1) 00:10:11.035 2.619 - 2.631: 98.4103% ( 2) 00:10:11.035 2.631 - 2.643: 98.4249% ( 2) 00:10:11.035 2.643 - 2.655: 98.4469% ( 3) 00:10:11.035 2.667 - 2.679: 98.4542% ( 1) 00:10:11.035 2.690 - 2.702: 98.4615% ( 1) 00:10:11.035 2.714 - 2.726: 98.4689% ( 1) 00:10:11.035 2.726 - 2.738: 98.4762% ( 1) 00:10:11.035 2.750 - 2.761: 98.4835% ( 1) 00:10:11.035 2.809 - 2.821: 98.4908% ( 1) 00:10:11.035 
2.844 - 2.856: 98.4982% ( 1) 00:10:11.035 2.904 - 2.916: 98.5128% ( 2) 00:10:11.035 2.939 - 2.951: 9[2024-07-15 16:26:50.282541] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:11.035 8.5201% ( 1) 00:10:11.035 3.295 - 3.319: 98.5275% ( 1) 00:10:11.035 3.366 - 3.390: 98.5348% ( 1) 00:10:11.035 3.413 - 3.437: 98.5495% ( 2) 00:10:11.035 3.461 - 3.484: 98.5568% ( 1) 00:10:11.035 3.484 - 3.508: 98.5641% ( 1) 00:10:11.035 3.508 - 3.532: 98.5788% ( 2) 00:10:11.035 3.532 - 3.556: 98.5861% ( 1) 00:10:11.035 3.579 - 3.603: 98.5934% ( 1) 00:10:11.035 3.627 - 3.650: 98.6007% ( 1) 00:10:11.035 3.674 - 3.698: 98.6081% ( 1) 00:10:11.035 3.698 - 3.721: 98.6154% ( 1) 00:10:11.035 3.769 - 3.793: 98.6227% ( 1) 00:10:11.035 3.840 - 3.864: 98.6447% ( 3) 00:10:11.035 3.911 - 3.935: 98.6593% ( 2) 00:10:11.035 3.959 - 3.982: 98.6740% ( 2) 00:10:11.035 3.982 - 4.006: 98.6886% ( 2) 00:10:11.035 4.006 - 4.030: 98.6960% ( 1) 00:10:11.035 4.124 - 4.148: 98.7033% ( 1) 00:10:11.035 4.456 - 4.480: 98.7106% ( 1) 00:10:11.035 4.622 - 4.646: 98.7179% ( 1) 00:10:11.035 4.859 - 4.883: 98.7253% ( 1) 00:10:11.035 5.618 - 5.641: 98.7326% ( 1) 00:10:11.035 6.116 - 6.163: 98.7399% ( 1) 00:10:11.035 6.163 - 6.210: 98.7473% ( 1) 00:10:11.035 6.400 - 6.447: 98.7546% ( 1) 00:10:11.035 6.447 - 6.495: 98.7619% ( 1) 00:10:11.035 6.590 - 6.637: 98.7692% ( 1) 00:10:11.035 6.637 - 6.684: 98.7766% ( 1) 00:10:11.035 7.111 - 7.159: 98.7839% ( 1) 00:10:11.035 7.348 - 7.396: 98.7912% ( 1) 00:10:11.035 7.585 - 7.633: 98.7985% ( 1) 00:10:11.035 7.775 - 7.822: 98.8059% ( 1) 00:10:11.035 7.822 - 7.870: 98.8132% ( 1) 00:10:11.035 7.917 - 7.964: 98.8205% ( 1) 00:10:11.035 8.107 - 8.154: 98.8278% ( 1) 00:10:11.035 9.529 - 9.576: 98.8352% ( 1) 00:10:11.035 15.170 - 15.265: 98.8425% ( 1) 00:10:11.035 15.455 - 15.550: 98.8571% ( 2) 00:10:11.035 15.644 - 15.739: 98.8645% ( 1) 00:10:11.035 15.739 - 15.834: 98.8718% ( 1) 00:10:11.035 15.834 - 15.929: 98.9011% ( 4) 
00:10:11.035 16.024 - 16.119: 98.9890% ( 12) 00:10:11.035 16.119 - 16.213: 99.0183% ( 4) 00:10:11.035 16.213 - 16.308: 99.0330% ( 2) 00:10:11.035 16.308 - 16.403: 99.0623% ( 4) 00:10:11.035 16.403 - 16.498: 99.0769% ( 2) 00:10:11.035 16.498 - 16.593: 99.1209% ( 6) 00:10:11.035 16.593 - 16.687: 99.1722% ( 7) 00:10:11.035 16.687 - 16.782: 99.2308% ( 8) 00:10:11.035 16.782 - 16.877: 99.2674% ( 5) 00:10:11.035 16.877 - 16.972: 99.2967% ( 4) 00:10:11.035 16.972 - 17.067: 99.3407% ( 6) 00:10:11.035 17.067 - 17.161: 99.3700% ( 4) 00:10:11.035 17.161 - 17.256: 99.3773% ( 1) 00:10:11.035 17.256 - 17.351: 99.3846% ( 1) 00:10:11.035 17.351 - 17.446: 99.3919% ( 1) 00:10:11.035 17.446 - 17.541: 99.4066% ( 2) 00:10:11.035 17.636 - 17.730: 99.4139% ( 1) 00:10:11.035 17.825 - 17.920: 99.4212% ( 1) 00:10:11.035 17.920 - 18.015: 99.4286% ( 1) 00:10:11.035 18.204 - 18.299: 99.4359% ( 1) 00:10:11.035 18.299 - 18.394: 99.4505% ( 2) 00:10:11.035 18.489 - 18.584: 99.4579% ( 1) 00:10:11.035 18.773 - 18.868: 99.4652% ( 1) 00:10:11.035 1134.744 - 1140.812: 99.4725% ( 1) 00:10:11.035 3980.705 - 4004.978: 99.8608% ( 53) 00:10:11.035 4004.978 - 4029.250: 99.9853% ( 17) 00:10:11.035 5995.330 - 6019.603: 99.9927% ( 1) 00:10:11.035 6990.507 - 7039.052: 100.0000% ( 1) 00:10:11.035 00:10:11.035 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:11.035 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:11.035 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:11.035 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:11.035 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:11.035 [ 00:10:11.035 { 
00:10:11.035 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:11.035 "subtype": "Discovery", 00:10:11.035 "listen_addresses": [], 00:10:11.035 "allow_any_host": true, 00:10:11.035 "hosts": [] 00:10:11.035 }, 00:10:11.035 { 00:10:11.035 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:11.035 "subtype": "NVMe", 00:10:11.035 "listen_addresses": [ 00:10:11.035 { 00:10:11.035 "trtype": "VFIOUSER", 00:10:11.035 "adrfam": "IPv4", 00:10:11.035 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:11.035 "trsvcid": "0" 00:10:11.035 } 00:10:11.035 ], 00:10:11.035 "allow_any_host": true, 00:10:11.035 "hosts": [], 00:10:11.035 "serial_number": "SPDK1", 00:10:11.035 "model_number": "SPDK bdev Controller", 00:10:11.035 "max_namespaces": 32, 00:10:11.035 "min_cntlid": 1, 00:10:11.035 "max_cntlid": 65519, 00:10:11.035 "namespaces": [ 00:10:11.035 { 00:10:11.035 "nsid": 1, 00:10:11.035 "bdev_name": "Malloc1", 00:10:11.035 "name": "Malloc1", 00:10:11.035 "nguid": "AFAEE8F48E274822AEED212F0295F855", 00:10:11.035 "uuid": "afaee8f4-8e27-4822-aeed-212f0295f855" 00:10:11.035 } 00:10:11.035 ] 00:10:11.035 }, 00:10:11.035 { 00:10:11.035 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:11.035 "subtype": "NVMe", 00:10:11.035 "listen_addresses": [ 00:10:11.035 { 00:10:11.035 "trtype": "VFIOUSER", 00:10:11.035 "adrfam": "IPv4", 00:10:11.035 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:11.035 "trsvcid": "0" 00:10:11.035 } 00:10:11.035 ], 00:10:11.035 "allow_any_host": true, 00:10:11.035 "hosts": [], 00:10:11.035 "serial_number": "SPDK2", 00:10:11.035 "model_number": "SPDK bdev Controller", 00:10:11.035 "max_namespaces": 32, 00:10:11.035 "min_cntlid": 1, 00:10:11.035 "max_cntlid": 65519, 00:10:11.035 "namespaces": [ 00:10:11.035 { 00:10:11.035 "nsid": 1, 00:10:11.036 "bdev_name": "Malloc2", 00:10:11.036 "name": "Malloc2", 00:10:11.036 "nguid": "E825FAE27C044AB79BD4ADCC68258544", 00:10:11.036 "uuid": "e825fae2-7c04-4ab7-9bd4-adcc68258544" 00:10:11.036 } 00:10:11.036 ] 00:10:11.036 } 
00:10:11.036 ] 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1460884 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:11.036 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:11.294 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.294 [2024-07-15 16:26:50.774396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:11.551 Malloc3 00:10:11.551 16:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:11.551 [2024-07-15 16:26:51.148071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:11.808 16:26:51 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:11.808 Asynchronous Event Request test 00:10:11.808 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:11.808 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:11.808 Registering asynchronous event callbacks... 00:10:11.808 Starting namespace attribute notice tests for all controllers... 00:10:11.808 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:11.808 aer_cb - Changed Namespace 00:10:11.808 Cleaning up... 00:10:12.067 [ 00:10:12.067 { 00:10:12.067 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:12.067 "subtype": "Discovery", 00:10:12.067 "listen_addresses": [], 00:10:12.067 "allow_any_host": true, 00:10:12.067 "hosts": [] 00:10:12.067 }, 00:10:12.067 { 00:10:12.067 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:12.067 "subtype": "NVMe", 00:10:12.067 "listen_addresses": [ 00:10:12.067 { 00:10:12.067 "trtype": "VFIOUSER", 00:10:12.067 "adrfam": "IPv4", 00:10:12.067 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:12.067 "trsvcid": "0" 00:10:12.067 } 00:10:12.067 ], 00:10:12.067 "allow_any_host": true, 00:10:12.067 "hosts": [], 00:10:12.067 "serial_number": "SPDK1", 00:10:12.067 "model_number": "SPDK bdev Controller", 00:10:12.067 "max_namespaces": 32, 00:10:12.067 "min_cntlid": 1, 00:10:12.067 "max_cntlid": 65519, 00:10:12.067 "namespaces": [ 00:10:12.067 { 00:10:12.067 "nsid": 1, 00:10:12.067 "bdev_name": "Malloc1", 00:10:12.067 "name": "Malloc1", 00:10:12.067 "nguid": "AFAEE8F48E274822AEED212F0295F855", 00:10:12.067 "uuid": "afaee8f4-8e27-4822-aeed-212f0295f855" 00:10:12.067 }, 00:10:12.067 { 00:10:12.067 "nsid": 2, 00:10:12.067 "bdev_name": "Malloc3", 00:10:12.067 "name": "Malloc3", 00:10:12.067 "nguid": "60AF95D8164841FB826343C973320167", 00:10:12.067 "uuid": "60af95d8-1648-41fb-8263-43c973320167" 00:10:12.067 } 00:10:12.067 ] 00:10:12.067 }, 
00:10:12.067 { 00:10:12.067 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:12.067 "subtype": "NVMe", 00:10:12.067 "listen_addresses": [ 00:10:12.067 { 00:10:12.067 "trtype": "VFIOUSER", 00:10:12.067 "adrfam": "IPv4", 00:10:12.067 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:12.067 "trsvcid": "0" 00:10:12.067 } 00:10:12.067 ], 00:10:12.067 "allow_any_host": true, 00:10:12.067 "hosts": [], 00:10:12.067 "serial_number": "SPDK2", 00:10:12.067 "model_number": "SPDK bdev Controller", 00:10:12.067 "max_namespaces": 32, 00:10:12.067 "min_cntlid": 1, 00:10:12.067 "max_cntlid": 65519, 00:10:12.067 "namespaces": [ 00:10:12.067 { 00:10:12.067 "nsid": 1, 00:10:12.067 "bdev_name": "Malloc2", 00:10:12.067 "name": "Malloc2", 00:10:12.067 "nguid": "E825FAE27C044AB79BD4ADCC68258544", 00:10:12.067 "uuid": "e825fae2-7c04-4ab7-9bd4-adcc68258544" 00:10:12.067 } 00:10:12.067 ] 00:10:12.067 } 00:10:12.067 ] 00:10:12.067 16:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1460884 00:10:12.067 16:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:12.067 16:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:12.067 16:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:12.067 16:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:12.067 [2024-07-15 16:26:51.440557] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:10:12.067 [2024-07-15 16:26:51.440595] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461016 ] 00:10:12.067 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.067 [2024-07-15 16:26:51.473850] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:12.067 [2024-07-15 16:26:51.476181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:12.067 [2024-07-15 16:26:51.476225] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff9b0a00000 00:10:12.067 [2024-07-15 16:26:51.477183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.478191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.479201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.480206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.481219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.482239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.483245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.484246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:12.067 [2024-07-15 16:26:51.485256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:12.067 [2024-07-15 16:26:51.485277] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff9b09f5000 00:10:12.067 [2024-07-15 16:26:51.486390] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:12.067 [2024-07-15 16:26:51.500477] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:12.067 [2024-07-15 16:26:51.500509] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:12.067 [2024-07-15 16:26:51.505604] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:12.067 [2024-07-15 16:26:51.505657] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:12.067 [2024-07-15 16:26:51.505746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:12.067 [2024-07-15 16:26:51.505769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:12.067 [2024-07-15 16:26:51.505779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:12.067 [2024-07-15 16:26:51.506612] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:12.067 [2024-07-15 16:26:51.506632] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:12.067 [2024-07-15 16:26:51.506644] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:12.067 [2024-07-15 16:26:51.507623] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:12.067 [2024-07-15 16:26:51.507643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:12.068 [2024-07-15 16:26:51.507656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:12.068 [2024-07-15 16:26:51.508627] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:12.068 [2024-07-15 16:26:51.508646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:12.068 [2024-07-15 16:26:51.509630] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:12.068 [2024-07-15 16:26:51.509650] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:12.068 [2024-07-15 16:26:51.509659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:12.068 [2024-07-15 16:26:51.509671] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:12.068 [2024-07-15 16:26:51.509780] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:12.068 [2024-07-15 16:26:51.509788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:12.068 [2024-07-15 16:26:51.509796] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:12.068 [2024-07-15 16:26:51.510640] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:12.068 [2024-07-15 16:26:51.511645] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:12.068 [2024-07-15 16:26:51.512650] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:12.068 [2024-07-15 16:26:51.513648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:12.068 [2024-07-15 16:26:51.513728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:12.068 [2024-07-15 16:26:51.514666] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:12.068 [2024-07-15 16:26:51.514686] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:12.068 [2024-07-15 16:26:51.514695] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.514718] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:12.068 [2024-07-15 16:26:51.514731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.514752] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:12.068 [2024-07-15 16:26:51.514761] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:12.068 [2024-07-15 16:26:51.514781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.522891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.522914] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:12.068 [2024-07-15 16:26:51.522928] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:12.068 [2024-07-15 16:26:51.522936] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:12.068 [2024-07-15 16:26:51.522944] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:12.068 [2024-07-15 16:26:51.522952] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:12.068 [2024-07-15 
16:26:51.522960] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:12.068 [2024-07-15 16:26:51.522968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.522981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.522997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.530889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.530917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.068 [2024-07-15 16:26:51.530932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.068 [2024-07-15 16:26:51.530945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.068 [2024-07-15 16:26:51.530957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:12.068 [2024-07-15 16:26:51.530966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.530981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:12.068 [2024-07-15 
16:26:51.530997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.538901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.538924] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:12.068 [2024-07-15 16:26:51.538934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.538945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.538955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.538969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.546890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.546991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.547008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.547021] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:12.068 [2024-07-15 
16:26:51.547030] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:12.068 [2024-07-15 16:26:51.547040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.554887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.554919] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:12.068 [2024-07-15 16:26:51.554939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.554954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.554968] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:12.068 [2024-07-15 16:26:51.554976] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:12.068 [2024-07-15 16:26:51.554986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.562889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.562917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.562932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.562946] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:12.068 [2024-07-15 16:26:51.562955] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:12.068 [2024-07-15 16:26:51.562964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.570887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.570909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.570922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.570936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.570947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.570955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.570966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.570975] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:10:12.068 [2024-07-15 16:26:51.570983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:12.068 [2024-07-15 16:26:51.570991] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:12.068 [2024-07-15 16:26:51.571016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.578889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.578916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.586902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.586927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.594886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.594911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:12.068 [2024-07-15 16:26:51.602889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:12.068 [2024-07-15 16:26:51.602924] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:12.068 [2024-07-15 16:26:51.602935] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:12.069 [2024-07-15 
16:26:51.602941] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:12.069 [2024-07-15 16:26:51.602947] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:12.069 [2024-07-15 16:26:51.602957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:12.069 [2024-07-15 16:26:51.602969] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:12.069 [2024-07-15 16:26:51.602977] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:12.069 [2024-07-15 16:26:51.602986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:12.069 [2024-07-15 16:26:51.602997] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:12.069 [2024-07-15 16:26:51.603004] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:12.069 [2024-07-15 16:26:51.603013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:12.069 [2024-07-15 16:26:51.603025] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:12.069 [2024-07-15 16:26:51.603032] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:12.069 [2024-07-15 16:26:51.603041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:12.069 [2024-07-15 16:26:51.610889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:12.069 [2024-07-15 16:26:51.610916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:12.069 [2024-07-15 16:26:51.610939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:12.069 [2024-07-15 16:26:51.610952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:12.069 ===================================================== 00:10:12.069 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:12.069 ===================================================== 00:10:12.069 Controller Capabilities/Features 00:10:12.069 ================================ 00:10:12.069 Vendor ID: 4e58 00:10:12.069 Subsystem Vendor ID: 4e58 00:10:12.069 Serial Number: SPDK2 00:10:12.069 Model Number: SPDK bdev Controller 00:10:12.069 Firmware Version: 24.09 00:10:12.069 Recommended Arb Burst: 6 00:10:12.069 IEEE OUI Identifier: 8d 6b 50 00:10:12.069 Multi-path I/O 00:10:12.069 May have multiple subsystem ports: Yes 00:10:12.069 May have multiple controllers: Yes 00:10:12.069 Associated with SR-IOV VF: No 00:10:12.069 Max Data Transfer Size: 131072 00:10:12.069 Max Number of Namespaces: 32 00:10:12.069 Max Number of I/O Queues: 127 00:10:12.069 NVMe Specification Version (VS): 1.3 00:10:12.069 NVMe Specification Version (Identify): 1.3 00:10:12.069 Maximum Queue Entries: 256 00:10:12.069 Contiguous Queues Required: Yes 00:10:12.069 Arbitration Mechanisms Supported 00:10:12.069 Weighted Round Robin: Not Supported 00:10:12.069 Vendor Specific: Not Supported 00:10:12.069 Reset Timeout: 15000 ms 00:10:12.069 Doorbell Stride: 4 bytes 00:10:12.069 NVM Subsystem Reset: Not Supported 00:10:12.069 Command Sets Supported 00:10:12.069 NVM Command Set: Supported 00:10:12.069 Boot Partition: Not Supported 
00:10:12.069 Memory Page Size Minimum: 4096 bytes 00:10:12.069 Memory Page Size Maximum: 4096 bytes 00:10:12.069 Persistent Memory Region: Not Supported 00:10:12.069 Optional Asynchronous Events Supported 00:10:12.069 Namespace Attribute Notices: Supported 00:10:12.069 Firmware Activation Notices: Not Supported 00:10:12.069 ANA Change Notices: Not Supported 00:10:12.069 PLE Aggregate Log Change Notices: Not Supported 00:10:12.069 LBA Status Info Alert Notices: Not Supported 00:10:12.069 EGE Aggregate Log Change Notices: Not Supported 00:10:12.069 Normal NVM Subsystem Shutdown event: Not Supported 00:10:12.069 Zone Descriptor Change Notices: Not Supported 00:10:12.069 Discovery Log Change Notices: Not Supported 00:10:12.069 Controller Attributes 00:10:12.069 128-bit Host Identifier: Supported 00:10:12.069 Non-Operational Permissive Mode: Not Supported 00:10:12.069 NVM Sets: Not Supported 00:10:12.069 Read Recovery Levels: Not Supported 00:10:12.069 Endurance Groups: Not Supported 00:10:12.069 Predictable Latency Mode: Not Supported 00:10:12.069 Traffic Based Keep ALive: Not Supported 00:10:12.069 Namespace Granularity: Not Supported 00:10:12.069 SQ Associations: Not Supported 00:10:12.069 UUID List: Not Supported 00:10:12.069 Multi-Domain Subsystem: Not Supported 00:10:12.069 Fixed Capacity Management: Not Supported 00:10:12.069 Variable Capacity Management: Not Supported 00:10:12.069 Delete Endurance Group: Not Supported 00:10:12.069 Delete NVM Set: Not Supported 00:10:12.069 Extended LBA Formats Supported: Not Supported 00:10:12.069 Flexible Data Placement Supported: Not Supported 00:10:12.069 00:10:12.069 Controller Memory Buffer Support 00:10:12.069 ================================ 00:10:12.069 Supported: No 00:10:12.069 00:10:12.069 Persistent Memory Region Support 00:10:12.069 ================================ 00:10:12.069 Supported: No 00:10:12.069 00:10:12.069 Admin Command Set Attributes 00:10:12.069 ============================ 00:10:12.069 Security 
Send/Receive: Not Supported 00:10:12.069 Format NVM: Not Supported 00:10:12.069 Firmware Activate/Download: Not Supported 00:10:12.069 Namespace Management: Not Supported 00:10:12.069 Device Self-Test: Not Supported 00:10:12.069 Directives: Not Supported 00:10:12.069 NVMe-MI: Not Supported 00:10:12.069 Virtualization Management: Not Supported 00:10:12.069 Doorbell Buffer Config: Not Supported 00:10:12.069 Get LBA Status Capability: Not Supported 00:10:12.069 Command & Feature Lockdown Capability: Not Supported 00:10:12.069 Abort Command Limit: 4 00:10:12.069 Async Event Request Limit: 4 00:10:12.069 Number of Firmware Slots: N/A 00:10:12.069 Firmware Slot 1 Read-Only: N/A 00:10:12.069 Firmware Activation Without Reset: N/A 00:10:12.069 Multiple Update Detection Support: N/A 00:10:12.069 Firmware Update Granularity: No Information Provided 00:10:12.069 Per-Namespace SMART Log: No 00:10:12.069 Asymmetric Namespace Access Log Page: Not Supported 00:10:12.069 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:12.069 Command Effects Log Page: Supported 00:10:12.069 Get Log Page Extended Data: Supported 00:10:12.069 Telemetry Log Pages: Not Supported 00:10:12.069 Persistent Event Log Pages: Not Supported 00:10:12.069 Supported Log Pages Log Page: May Support 00:10:12.069 Commands Supported & Effects Log Page: Not Supported 00:10:12.069 Feature Identifiers & Effects Log Page:May Support 00:10:12.069 NVMe-MI Commands & Effects Log Page: May Support 00:10:12.069 Data Area 4 for Telemetry Log: Not Supported 00:10:12.069 Error Log Page Entries Supported: 128 00:10:12.069 Keep Alive: Supported 00:10:12.069 Keep Alive Granularity: 10000 ms 00:10:12.069 00:10:12.069 NVM Command Set Attributes 00:10:12.069 ========================== 00:10:12.069 Submission Queue Entry Size 00:10:12.069 Max: 64 00:10:12.069 Min: 64 00:10:12.069 Completion Queue Entry Size 00:10:12.069 Max: 16 00:10:12.069 Min: 16 00:10:12.069 Number of Namespaces: 32 00:10:12.069 Compare Command: Supported 
00:10:12.069 Write Uncorrectable Command: Not Supported 00:10:12.069 Dataset Management Command: Supported 00:10:12.069 Write Zeroes Command: Supported 00:10:12.069 Set Features Save Field: Not Supported 00:10:12.069 Reservations: Not Supported 00:10:12.069 Timestamp: Not Supported 00:10:12.069 Copy: Supported 00:10:12.069 Volatile Write Cache: Present 00:10:12.069 Atomic Write Unit (Normal): 1 00:10:12.069 Atomic Write Unit (PFail): 1 00:10:12.069 Atomic Compare & Write Unit: 1 00:10:12.069 Fused Compare & Write: Supported 00:10:12.069 Scatter-Gather List 00:10:12.069 SGL Command Set: Supported (Dword aligned) 00:10:12.069 SGL Keyed: Not Supported 00:10:12.069 SGL Bit Bucket Descriptor: Not Supported 00:10:12.069 SGL Metadata Pointer: Not Supported 00:10:12.069 Oversized SGL: Not Supported 00:10:12.069 SGL Metadata Address: Not Supported 00:10:12.069 SGL Offset: Not Supported 00:10:12.069 Transport SGL Data Block: Not Supported 00:10:12.069 Replay Protected Memory Block: Not Supported 00:10:12.069 00:10:12.069 Firmware Slot Information 00:10:12.069 ========================= 00:10:12.069 Active slot: 1 00:10:12.069 Slot 1 Firmware Revision: 24.09 00:10:12.069 00:10:12.069 00:10:12.069 Commands Supported and Effects 00:10:12.069 ============================== 00:10:12.069 Admin Commands 00:10:12.069 -------------- 00:10:12.069 Get Log Page (02h): Supported 00:10:12.069 Identify (06h): Supported 00:10:12.069 Abort (08h): Supported 00:10:12.069 Set Features (09h): Supported 00:10:12.069 Get Features (0Ah): Supported 00:10:12.069 Asynchronous Event Request (0Ch): Supported 00:10:12.069 Keep Alive (18h): Supported 00:10:12.069 I/O Commands 00:10:12.069 ------------ 00:10:12.069 Flush (00h): Supported LBA-Change 00:10:12.069 Write (01h): Supported LBA-Change 00:10:12.069 Read (02h): Supported 00:10:12.069 Compare (05h): Supported 00:10:12.069 Write Zeroes (08h): Supported LBA-Change 00:10:12.069 Dataset Management (09h): Supported LBA-Change 00:10:12.069 Copy (19h): 
Supported LBA-Change 00:10:12.069 00:10:12.069 Error Log 00:10:12.069 ========= 00:10:12.069 00:10:12.069 Arbitration 00:10:12.069 =========== 00:10:12.069 Arbitration Burst: 1 00:10:12.069 00:10:12.069 Power Management 00:10:12.070 ================ 00:10:12.070 Number of Power States: 1 00:10:12.070 Current Power State: Power State #0 00:10:12.070 Power State #0: 00:10:12.070 Max Power: 0.00 W 00:10:12.070 Non-Operational State: Operational 00:10:12.070 Entry Latency: Not Reported 00:10:12.070 Exit Latency: Not Reported 00:10:12.070 Relative Read Throughput: 0 00:10:12.070 Relative Read Latency: 0 00:10:12.070 Relative Write Throughput: 0 00:10:12.070 Relative Write Latency: 0 00:10:12.070 Idle Power: Not Reported 00:10:12.070 Active Power: Not Reported 00:10:12.070 Non-Operational Permissive Mode: Not Supported 00:10:12.070 00:10:12.070 Health Information 00:10:12.070 ================== 00:10:12.070 Critical Warnings: 00:10:12.070 Available Spare Space: OK 00:10:12.070 Temperature: OK 00:10:12.070 Device Reliability: OK 00:10:12.070 Read Only: No 00:10:12.070 Volatile Memory Backup: OK 00:10:12.070 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:12.070 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:12.070 Available Spare: 0% 00:10:12.070 Available Sp[2024-07-15 16:26:51.611075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:12.070 [2024-07-15 16:26:51.618885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:12.070 [2024-07-15 16:26:51.618940] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:12.070 [2024-07-15 16:26:51.618958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.070 [2024-07-15 16:26:51.618969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.070 [2024-07-15 16:26:51.618980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.070 [2024-07-15 16:26:51.618990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:12.070 [2024-07-15 16:26:51.619053] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:12.070 [2024-07-15 16:26:51.619074] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:12.070 [2024-07-15 16:26:51.620058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:12.070 [2024-07-15 16:26:51.620143] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:12.070 [2024-07-15 16:26:51.620174] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:12.070 [2024-07-15 16:26:51.621064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:12.070 [2024-07-15 16:26:51.621088] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:12.070 [2024-07-15 16:26:51.621140] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:12.070 [2024-07-15 16:26:51.622329] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:12.327 are Threshold: 0% 00:10:12.327 
Life Percentage Used: 0% 00:10:12.327 Data Units Read: 0 00:10:12.327 Data Units Written: 0 00:10:12.327 Host Read Commands: 0 00:10:12.327 Host Write Commands: 0 00:10:12.327 Controller Busy Time: 0 minutes 00:10:12.327 Power Cycles: 0 00:10:12.327 Power On Hours: 0 hours 00:10:12.327 Unsafe Shutdowns: 0 00:10:12.327 Unrecoverable Media Errors: 0 00:10:12.327 Lifetime Error Log Entries: 0 00:10:12.327 Warning Temperature Time: 0 minutes 00:10:12.327 Critical Temperature Time: 0 minutes 00:10:12.327 00:10:12.327 Number of Queues 00:10:12.327 ================ 00:10:12.327 Number of I/O Submission Queues: 127 00:10:12.327 Number of I/O Completion Queues: 127 00:10:12.327 00:10:12.327 Active Namespaces 00:10:12.327 ================= 00:10:12.327 Namespace ID:1 00:10:12.327 Error Recovery Timeout: Unlimited 00:10:12.327 Command Set Identifier: NVM (00h) 00:10:12.327 Deallocate: Supported 00:10:12.327 Deallocated/Unwritten Error: Not Supported 00:10:12.327 Deallocated Read Value: Unknown 00:10:12.327 Deallocate in Write Zeroes: Not Supported 00:10:12.327 Deallocated Guard Field: 0xFFFF 00:10:12.327 Flush: Supported 00:10:12.327 Reservation: Supported 00:10:12.327 Namespace Sharing Capabilities: Multiple Controllers 00:10:12.327 Size (in LBAs): 131072 (0GiB) 00:10:12.327 Capacity (in LBAs): 131072 (0GiB) 00:10:12.327 Utilization (in LBAs): 131072 (0GiB) 00:10:12.327 NGUID: E825FAE27C044AB79BD4ADCC68258544 00:10:12.327 UUID: e825fae2-7c04-4ab7-9bd4-adcc68258544 00:10:12.327 Thin Provisioning: Not Supported 00:10:12.327 Per-NS Atomic Units: Yes 00:10:12.327 Atomic Boundary Size (Normal): 0 00:10:12.328 Atomic Boundary Size (PFail): 0 00:10:12.328 Atomic Boundary Offset: 0 00:10:12.328 Maximum Single Source Range Length: 65535 00:10:12.328 Maximum Copy Length: 65535 00:10:12.328 Maximum Source Range Count: 1 00:10:12.328 NGUID/EUI64 Never Reused: No 00:10:12.328 Namespace Write Protected: No 00:10:12.328 Number of LBA Formats: 1 00:10:12.328 Current LBA Format: LBA Format 
#00 00:10:12.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:12.328 00:10:12.328 16:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:12.328 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.328 [2024-07-15 16:26:51.850847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.591 Initializing NVMe Controllers 00:10:17.591 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:17.591 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:17.591 Initialization complete. Launching workers. 00:10:17.591 ======================================================== 00:10:17.591 Latency(us) 00:10:17.591 Device Information : IOPS MiB/s Average min max 00:10:17.591 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35153.45 137.32 3640.57 1137.49 9257.73 00:10:17.591 ======================================================== 00:10:17.591 Total : 35153.45 137.32 3640.57 1137.49 9257.73 00:10:17.591 00:10:17.591 [2024-07-15 16:26:56.957247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.591 16:26:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:17.591 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.849 [2024-07-15 16:26:57.199905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:23.169 
Initializing NVMe Controllers 00:10:23.169 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:23.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:23.169 Initialization complete. Launching workers. 00:10:23.169 ======================================================== 00:10:23.169 Latency(us) 00:10:23.169 Device Information : IOPS MiB/s Average min max 00:10:23.169 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31454.97 122.87 4068.96 1191.05 7869.33 00:10:23.169 ======================================================== 00:10:23.169 Total : 31454.97 122.87 4068.96 1191.05 7869.33 00:10:23.169 00:10:23.169 [2024-07-15 16:27:02.221649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:23.169 16:27:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:23.169 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.169 [2024-07-15 16:27:02.422916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:28.439 [2024-07-15 16:27:07.561010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:28.439 Initializing NVMe Controllers 00:10:28.439 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:28.439 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:28.439 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:28.439 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:10:28.439 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:28.439 Initialization complete. Launching workers. 00:10:28.439 Starting thread on core 2 00:10:28.439 Starting thread on core 3 00:10:28.439 Starting thread on core 1 00:10:28.439 16:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:28.439 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.439 [2024-07-15 16:27:07.867377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.723 [2024-07-15 16:27:11.013188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.723 Initializing NVMe Controllers 00:10:31.723 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:31.723 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:31.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:31.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:31.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:31.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:31.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:31.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:31.724 Initialization complete. Launching workers. 
00:10:31.724 Starting thread on core 1 with urgent priority queue 00:10:31.724 Starting thread on core 2 with urgent priority queue 00:10:31.724 Starting thread on core 3 with urgent priority queue 00:10:31.724 Starting thread on core 0 with urgent priority queue 00:10:31.724 SPDK bdev Controller (SPDK2 ) core 0: 3520.67 IO/s 28.40 secs/100000 ios 00:10:31.724 SPDK bdev Controller (SPDK2 ) core 1: 3627.00 IO/s 27.57 secs/100000 ios 00:10:31.724 SPDK bdev Controller (SPDK2 ) core 2: 3954.00 IO/s 25.29 secs/100000 ios 00:10:31.724 SPDK bdev Controller (SPDK2 ) core 3: 3170.67 IO/s 31.54 secs/100000 ios 00:10:31.724 ======================================================== 00:10:31.724 00:10:31.724 16:27:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:31.724 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.724 [2024-07-15 16:27:11.317581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.982 Initializing NVMe Controllers 00:10:31.982 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:31.982 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:31.982 Namespace ID: 1 size: 0GB 00:10:31.982 Initialization complete. 00:10:31.982 INFO: using host memory buffer for IO 00:10:31.982 Hello world! 
00:10:31.982 [2024-07-15 16:27:11.327655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.982 16:27:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:31.982 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.241 [2024-07-15 16:27:11.627261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:33.182 Initializing NVMe Controllers 00:10:33.182 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:33.182 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:33.182 Initialization complete. Launching workers. 00:10:33.182 submit (in ns) avg, min, max = 8135.4, 3510.0, 4017275.6 00:10:33.182 complete (in ns) avg, min, max = 25981.6, 2055.6, 4017147.8 00:10:33.182 00:10:33.182 Submit histogram 00:10:33.182 ================ 00:10:33.182 Range in us Cumulative Count 00:10:33.182 3.508 - 3.532: 0.0964% ( 13) 00:10:33.182 3.532 - 3.556: 0.3932% ( 40) 00:10:33.182 3.556 - 3.579: 1.8101% ( 191) 00:10:33.182 3.579 - 3.603: 5.2522% ( 464) 00:10:33.182 3.603 - 3.627: 11.1721% ( 798) 00:10:33.182 3.627 - 3.650: 19.1098% ( 1070) 00:10:33.182 3.650 - 3.674: 29.9407% ( 1460) 00:10:33.182 3.674 - 3.698: 38.8650% ( 1203) 00:10:33.182 3.698 - 3.721: 47.4703% ( 1160) 00:10:33.182 3.721 - 3.745: 52.6706% ( 701) 00:10:33.182 3.745 - 3.769: 57.0104% ( 585) 00:10:33.182 3.769 - 3.793: 60.8902% ( 523) 00:10:33.182 3.793 - 3.816: 64.4214% ( 476) 00:10:33.182 3.816 - 3.840: 67.6706% ( 438) 00:10:33.182 3.840 - 3.864: 71.3947% ( 502) 00:10:33.182 3.864 - 3.887: 75.3042% ( 527) 00:10:33.182 3.887 - 3.911: 79.3620% ( 547) 00:10:33.182 3.911 - 3.935: 82.9896% ( 489) 00:10:33.182 3.935 - 3.959: 85.6825% ( 363) 00:10:33.182 3.959 - 3.982: 87.7003% ( 
272) 00:10:33.182 3.982 - 4.006: 89.3917% ( 228) 00:10:33.182 4.006 - 4.030: 90.5415% ( 155) 00:10:33.182 4.030 - 4.053: 91.5208% ( 132) 00:10:33.182 4.053 - 4.077: 92.3887% ( 117) 00:10:33.182 4.077 - 4.101: 93.3383% ( 128) 00:10:33.182 4.101 - 4.124: 94.1543% ( 110) 00:10:33.182 4.124 - 4.148: 94.7997% ( 87) 00:10:33.182 4.148 - 4.172: 95.2671% ( 63) 00:10:33.182 4.172 - 4.196: 95.6306% ( 49) 00:10:33.182 4.196 - 4.219: 95.9570% ( 44) 00:10:33.182 4.219 - 4.243: 96.1202% ( 22) 00:10:33.182 4.243 - 4.267: 96.2685% ( 20) 00:10:33.182 4.267 - 4.290: 96.4392% ( 23) 00:10:33.182 4.290 - 4.314: 96.5208% ( 11) 00:10:33.182 4.314 - 4.338: 96.6320% ( 15) 00:10:33.182 4.338 - 4.361: 96.7582% ( 17) 00:10:33.182 4.361 - 4.385: 96.8398% ( 11) 00:10:33.182 4.385 - 4.409: 96.8917% ( 7) 00:10:33.182 4.409 - 4.433: 96.9362% ( 6) 00:10:33.182 4.433 - 4.456: 97.0030% ( 9) 00:10:33.182 4.456 - 4.480: 97.0104% ( 1) 00:10:33.182 4.480 - 4.504: 97.0178% ( 1) 00:10:33.182 4.504 - 4.527: 97.0401% ( 3) 00:10:33.182 4.527 - 4.551: 97.0475% ( 1) 00:10:33.182 4.551 - 4.575: 97.0549% ( 1) 00:10:33.182 4.575 - 4.599: 97.0697% ( 2) 00:10:33.182 4.599 - 4.622: 97.0772% ( 1) 00:10:33.182 4.622 - 4.646: 97.0994% ( 3) 00:10:33.182 4.646 - 4.670: 97.1068% ( 1) 00:10:33.182 4.670 - 4.693: 97.1142% ( 1) 00:10:33.182 4.693 - 4.717: 97.1291% ( 2) 00:10:33.182 4.741 - 4.764: 97.1365% ( 1) 00:10:33.182 4.764 - 4.788: 97.1513% ( 2) 00:10:33.182 4.812 - 4.836: 97.1810% ( 4) 00:10:33.182 4.836 - 4.859: 97.2107% ( 4) 00:10:33.182 4.859 - 4.883: 97.2626% ( 7) 00:10:33.182 4.883 - 4.907: 97.2997% ( 5) 00:10:33.182 4.907 - 4.930: 97.3516% ( 7) 00:10:33.182 4.930 - 4.954: 97.3887% ( 5) 00:10:33.182 4.954 - 4.978: 97.4184% ( 4) 00:10:33.182 4.978 - 5.001: 97.4629% ( 6) 00:10:33.182 5.001 - 5.025: 97.4926% ( 4) 00:10:33.182 5.025 - 5.049: 97.5445% ( 7) 00:10:33.182 5.049 - 5.073: 97.6187% ( 10) 00:10:33.182 5.073 - 5.096: 97.6632% ( 6) 00:10:33.182 5.096 - 5.120: 97.6855% ( 3) 00:10:33.182 5.120 - 5.144: 97.7226% ( 
5) 00:10:33.182 5.144 - 5.167: 97.7671% ( 6) 00:10:33.182 5.167 - 5.191: 97.8042% ( 5) 00:10:33.182 5.191 - 5.215: 97.8561% ( 7) 00:10:33.182 5.215 - 5.239: 97.8932% ( 5) 00:10:33.182 5.239 - 5.262: 97.9154% ( 3) 00:10:33.182 5.262 - 5.286: 97.9228% ( 1) 00:10:33.182 5.286 - 5.310: 97.9303% ( 1) 00:10:33.182 5.310 - 5.333: 97.9525% ( 3) 00:10:33.182 5.333 - 5.357: 97.9599% ( 1) 00:10:33.182 5.357 - 5.381: 97.9674% ( 1) 00:10:33.182 5.381 - 5.404: 97.9822% ( 2) 00:10:33.182 5.404 - 5.428: 97.9970% ( 2) 00:10:33.182 5.428 - 5.452: 98.0045% ( 1) 00:10:33.182 5.523 - 5.547: 98.0119% ( 1) 00:10:33.182 5.547 - 5.570: 98.0267% ( 2) 00:10:33.182 5.618 - 5.641: 98.0341% ( 1) 00:10:33.182 5.807 - 5.831: 98.0490% ( 2) 00:10:33.182 5.879 - 5.902: 98.0564% ( 1) 00:10:33.182 5.902 - 5.926: 98.0712% ( 2) 00:10:33.182 5.950 - 5.973: 98.0935% ( 3) 00:10:33.182 5.973 - 5.997: 98.1009% ( 1) 00:10:33.182 6.044 - 6.068: 98.1083% ( 1) 00:10:33.182 6.068 - 6.116: 98.1157% ( 1) 00:10:33.182 6.116 - 6.163: 98.1231% ( 1) 00:10:33.182 6.305 - 6.353: 98.1380% ( 2) 00:10:33.182 6.542 - 6.590: 98.1454% ( 1) 00:10:33.182 6.637 - 6.684: 98.1528% ( 1) 00:10:33.182 6.684 - 6.732: 98.1602% ( 1) 00:10:33.182 6.779 - 6.827: 98.1751% ( 2) 00:10:33.182 6.827 - 6.874: 98.1825% ( 1) 00:10:33.182 6.921 - 6.969: 98.1973% ( 2) 00:10:33.182 7.159 - 7.206: 98.2047% ( 1) 00:10:33.182 7.348 - 7.396: 98.2122% ( 1) 00:10:33.182 7.396 - 7.443: 98.2196% ( 1) 00:10:33.182 7.443 - 7.490: 98.2344% ( 2) 00:10:33.182 7.490 - 7.538: 98.2493% ( 2) 00:10:33.182 7.680 - 7.727: 98.2567% ( 1) 00:10:33.182 7.727 - 7.775: 98.2641% ( 1) 00:10:33.182 7.870 - 7.917: 98.2715% ( 1) 00:10:33.182 7.964 - 8.012: 98.2789% ( 1) 00:10:33.182 8.012 - 8.059: 98.2864% ( 1) 00:10:33.182 8.059 - 8.107: 98.2938% ( 1) 00:10:33.182 8.154 - 8.201: 98.3086% ( 2) 00:10:33.182 8.249 - 8.296: 98.3160% ( 1) 00:10:33.182 8.344 - 8.391: 98.3309% ( 2) 00:10:33.182 8.391 - 8.439: 98.3457% ( 2) 00:10:33.182 8.439 - 8.486: 98.3531% ( 1) 00:10:33.182 8.533 - 
8.581: 98.3605% ( 1) 00:10:33.182 8.581 - 8.628: 98.3828% ( 3) 00:10:33.182 8.676 - 8.723: 98.3902% ( 1) 00:10:33.182 8.723 - 8.770: 98.4050% ( 2) 00:10:33.182 8.770 - 8.818: 98.4125% ( 1) 00:10:33.182 8.818 - 8.865: 98.4347% ( 3) 00:10:33.182 8.913 - 8.960: 98.4421% ( 1) 00:10:33.182 8.960 - 9.007: 98.4570% ( 2) 00:10:33.182 9.007 - 9.055: 98.4644% ( 1) 00:10:33.182 9.055 - 9.102: 98.4718% ( 1) 00:10:33.182 9.197 - 9.244: 98.4866% ( 2) 00:10:33.182 9.434 - 9.481: 98.5015% ( 2) 00:10:33.182 9.481 - 9.529: 98.5163% ( 2) 00:10:33.182 9.576 - 9.624: 98.5312% ( 2) 00:10:33.182 9.719 - 9.766: 98.5386% ( 1) 00:10:33.182 9.766 - 9.813: 98.5460% ( 1) 00:10:33.182 9.861 - 9.908: 98.5608% ( 2) 00:10:33.182 9.908 - 9.956: 98.5831% ( 3) 00:10:33.182 10.050 - 10.098: 98.5905% ( 1) 00:10:33.182 10.145 - 10.193: 98.5979% ( 1) 00:10:33.182 10.193 - 10.240: 98.6128% ( 2) 00:10:33.182 10.240 - 10.287: 98.6276% ( 2) 00:10:33.182 10.287 - 10.335: 98.6424% ( 2) 00:10:33.182 10.335 - 10.382: 98.6499% ( 1) 00:10:33.182 10.382 - 10.430: 98.6647% ( 2) 00:10:33.182 10.430 - 10.477: 98.6795% ( 2) 00:10:33.182 10.477 - 10.524: 98.6869% ( 1) 00:10:33.182 10.524 - 10.572: 98.6944% ( 1) 00:10:33.182 10.667 - 10.714: 98.7018% ( 1) 00:10:33.182 11.141 - 11.188: 98.7092% ( 1) 00:10:33.182 11.236 - 11.283: 98.7166% ( 1) 00:10:33.182 11.330 - 11.378: 98.7240% ( 1) 00:10:33.182 11.473 - 11.520: 98.7315% ( 1) 00:10:33.182 11.520 - 11.567: 98.7389% ( 1) 00:10:33.182 11.615 - 11.662: 98.7463% ( 1) 00:10:33.182 11.662 - 11.710: 98.7685% ( 3) 00:10:33.182 11.710 - 11.757: 98.7760% ( 1) 00:10:33.182 11.899 - 11.947: 98.7908% ( 2) 00:10:33.182 11.947 - 11.994: 98.7982% ( 1) 00:10:33.182 11.994 - 12.041: 98.8131% ( 2) 00:10:33.182 12.041 - 12.089: 98.8205% ( 1) 00:10:33.182 12.136 - 12.231: 98.8279% ( 1) 00:10:33.182 12.421 - 12.516: 98.8353% ( 1) 00:10:33.182 12.516 - 12.610: 98.8427% ( 1) 00:10:33.182 12.610 - 12.705: 98.8650% ( 3) 00:10:33.182 12.705 - 12.800: 98.8724% ( 1) 00:10:33.182 12.800 - 12.895: 
98.8872% ( 2) 00:10:33.182 12.895 - 12.990: 98.9021% ( 2) 00:10:33.182 13.179 - 13.274: 98.9095% ( 1) 00:10:33.182 13.274 - 13.369: 98.9243% ( 2) 00:10:33.182 13.369 - 13.464: 98.9318% ( 1) 00:10:33.182 13.938 - 14.033: 98.9392% ( 1) 00:10:33.182 14.033 - 14.127: 98.9466% ( 1) 00:10:33.182 14.127 - 14.222: 98.9614% ( 2) 00:10:33.182 14.222 - 14.317: 98.9688% ( 1) 00:10:33.182 14.317 - 14.412: 98.9837% ( 2) 00:10:33.182 14.412 - 14.507: 98.9911% ( 1) 00:10:33.182 14.696 - 14.791: 98.9985% ( 1) 00:10:33.182 17.067 - 17.161: 99.0134% ( 2) 00:10:33.182 17.161 - 17.256: 99.0282% ( 2) 00:10:33.182 17.256 - 17.351: 99.0504% ( 3) 00:10:33.182 17.351 - 17.446: 99.0727% ( 3) 00:10:33.182 17.446 - 17.541: 99.1024% ( 4) 00:10:33.182 17.541 - 17.636: 99.1246% ( 3) 00:10:33.182 17.636 - 17.730: 99.1469% ( 3) 00:10:33.182 17.730 - 17.825: 99.2062% ( 8) 00:10:33.182 17.825 - 17.920: 99.2582% ( 7) 00:10:33.182 17.920 - 18.015: 99.3249% ( 9) 00:10:33.182 18.015 - 18.110: 99.3843% ( 8) 00:10:33.182 18.110 - 18.204: 99.4214% ( 5) 00:10:33.182 18.204 - 18.299: 99.5252% ( 14) 00:10:33.182 18.299 - 18.394: 99.5920% ( 9) 00:10:33.182 18.394 - 18.489: 99.6736% ( 11) 00:10:33.182 18.489 - 18.584: 99.7552% ( 11) 00:10:33.182 18.584 - 18.679: 99.7849% ( 4) 00:10:33.182 18.679 - 18.773: 99.7997% ( 2) 00:10:33.182 18.773 - 18.868: 99.8071% ( 1) 00:10:33.182 18.868 - 18.963: 99.8220% ( 2) 00:10:33.182 18.963 - 19.058: 99.8294% ( 1) 00:10:33.182 21.144 - 21.239: 99.8368% ( 1) 00:10:33.182 21.523 - 21.618: 99.8442% ( 1) 00:10:33.182 22.945 - 23.040: 99.8516% ( 1) 00:10:33.182 23.230 - 23.324: 99.8591% ( 1) 00:10:33.182 26.548 - 26.738: 99.8665% ( 1) 00:10:33.182 28.065 - 28.255: 99.8739% ( 1) 00:10:33.182 28.444 - 28.634: 99.8813% ( 1) 00:10:33.182 28.634 - 28.824: 99.8887% ( 1) 00:10:33.182 34.133 - 34.323: 99.8961% ( 1) 00:10:33.183 3980.705 - 4004.978: 99.9629% ( 9) 00:10:33.183 4004.978 - 4029.250: 100.0000% ( 5) 00:10:33.183 00:10:33.183 Complete histogram 00:10:33.183 ================== 
00:10:33.183 Range in us Cumulative Count 00:10:33.183 2.050 - 2.062: 2.5668% ( 346) 00:10:33.183 2.062 - 2.074: 29.9036% ( 3685) 00:10:33.183 2.074 - 2.086: 35.8160% ( 797) 00:10:33.183 2.086 - 2.098: 43.2344% ( 1000) 00:10:33.183 2.098 - 2.110: 54.8442% ( 1565) 00:10:33.183 2.110 - 2.121: 57.6780% ( 382) 00:10:33.183 2.121 - 2.133: 63.0786% ( 728) 00:10:33.183 2.133 - 2.145: 71.2092% ( 1096) 00:10:33.183 2.145 - 2.157: 73.1825% ( 266) 00:10:33.183 2.157 - 2.169: 77.0475% ( 521) 00:10:33.183 2.169 - 2.181: 80.3487% ( 445) 00:10:33.183 2.181 - 2.193: 81.4911% ( 154) 00:10:33.183 2.193 - 2.204: 83.4718% ( 267) 00:10:33.183 2.204 - 2.216: 86.5950% ( 421) 00:10:33.183 2.216 - 2.228: 87.2849% ( 93) 00:10:33.183 2.228 - 2.240: 89.0579% ( 239) 00:10:33.183 2.240 - 2.252: 92.1439% ( 416) 00:10:33.183 2.252 - 2.264: 93.0119% ( 117) 00:10:33.183 2.264 - 2.276: 93.4718% ( 62) 00:10:33.183 2.276 - 2.287: 93.7463% ( 37) 00:10:33.183 2.287 - 2.299: 93.9243% ( 24) 00:10:33.183 2.299 - 2.311: 94.2656% ( 46) 00:10:33.183 2.311 - 2.323: 94.9703% ( 95) 00:10:33.183 2.323 - 2.335: 95.3042% ( 45) 00:10:33.183 2.335 - 2.347: 95.4154% ( 15) 00:10:33.183 2.347 - 2.359: 95.5119% ( 13) 00:10:33.183 2.359 - 2.370: 95.5638% ( 7) 00:10:33.183 2.370 - 2.382: 95.6751% ( 15) 00:10:33.183 2.382 - 2.394: 95.8531% ( 24) 00:10:33.183 2.394 - 2.406: 96.1053% ( 34) 00:10:33.183 2.406 - 2.418: 96.3056% ( 27) 00:10:33.183 2.418 - 2.430: 96.5356% ( 31) 00:10:33.183 2.430 - 2.441: 96.6914% ( 21) 00:10:33.183 2.441 - 2.453: 96.8546% ( 22) 00:10:33.183 2.453 - 2.465: 97.0475% ( 26) 00:10:33.183 2.465 - 2.477: 97.1958% ( 20) 00:10:33.183 2.477 - 2.489: 97.3887% ( 26) 00:10:33.183 2.489 - 2.501: 97.4926% ( 14) 00:10:33.183 2.501 - 2.513: 97.7077% ( 29) 00:10:33.183 2.513 - 2.524: 97.8264% ( 16) 00:10:33.183 2.524 - 2.536: 97.9377% ( 15) 00:10:33.183 2.536 - 2.548: 98.0119% ( 10) 00:10:33.183 2.548 - 2.560: 98.0564% ( 6) 00:10:33.183 2.560 - 2.572: 98.1009% ( 6) 00:10:33.183 2.572 - 2.584: 98.1602% ( 8) 
00:10:33.183 2.584 - 2.596: 98.2196% ( 8) 00:10:33.183 2.596 - 2.607: 98.2938% ( 10) 00:10:33.183 2.607 - 2.619: 98.3160% ( 3) 00:10:33.183 [2024-07-15 16:27:12.722720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:33.183 2.619 - 2.631: 98.3234% ( 1) 00:10:33.183 2.631 - 2.643: 98.3605% ( 5) 00:10:33.183 2.643 - 2.655: 98.3680% ( 1) 00:10:33.183 2.655 - 2.667: 98.3754% ( 1) 00:10:33.183 2.690 - 2.702: 98.3828% ( 1) 00:10:33.183 2.714 - 2.726: 98.3902% ( 1) 00:10:33.183 2.726 - 2.738: 98.3976% ( 1) 00:10:33.183 2.738 - 2.750: 98.4050% ( 1) 00:10:33.183 2.750 - 2.761: 98.4125% ( 1) 00:10:33.183 2.761 - 2.773: 98.4199% ( 1) 00:10:33.183 2.844 - 2.856: 98.4273% ( 1) 00:10:33.183 2.880 - 2.892: 98.4347% ( 1) 00:10:33.183 3.532 - 3.556: 98.4496% ( 2) 00:10:33.183 3.556 - 3.579: 98.4644% ( 2) 00:10:33.183 3.627 - 3.650: 98.4941% ( 4) 00:10:33.183 3.674 - 3.698: 98.5015% ( 1) 00:10:33.183 3.721 - 3.745: 98.5089% ( 1) 00:10:33.183 3.745 - 3.769: 98.5163% ( 1) 00:10:33.183 3.769 - 3.793: 98.5386% ( 3) 00:10:33.183 3.793 - 3.816: 98.5460% ( 1) 00:10:33.183 3.840 - 3.864: 98.5534% ( 1) 00:10:33.183 3.887 - 3.911: 98.5608% ( 1) 00:10:33.183 3.982 - 4.006: 98.5682% ( 1) 00:10:33.183 4.030 - 4.053: 98.5757% ( 1) 00:10:33.183 4.053 - 4.077: 98.5831% ( 1) 00:10:33.183 4.077 - 4.101: 98.5905% ( 1) 00:10:33.183 4.101 - 4.124: 98.5979% ( 1) 00:10:33.183 4.124 - 4.148: 98.6053% ( 1) 00:10:33.183 4.219 - 4.243: 98.6128% ( 1) 00:10:33.183 5.570 - 5.594: 98.6202% ( 1) 00:10:33.183 5.689 - 5.713: 98.6276% ( 1) 00:10:33.183 6.021 - 6.044: 98.6350% ( 1) 00:10:33.183 6.044 - 6.068: 98.6424% ( 1) 00:10:33.183 6.116 - 6.163: 98.6499% ( 1) 00:10:33.183 6.210 - 6.258: 98.6573% ( 1) 00:10:33.183 6.258 - 6.305: 98.6721% ( 2) 00:10:33.183 6.305 - 6.353: 98.6869% ( 2) 00:10:33.183 6.353 - 6.400: 98.7018% ( 2) 00:10:33.183 6.542 - 6.590: 98.7092% ( 1) 00:10:33.183 6.590 - 6.637: 98.7315% ( 3) 00:10:33.183 6.637 - 6.684: 98.7389% ( 1) 
00:10:33.183 6.732 - 6.779: 98.7463% ( 1) 00:10:33.183 6.921 - 6.969: 98.7537% ( 1) 00:10:33.183 7.443 - 7.490: 98.7611% ( 1) 00:10:33.183 7.775 - 7.822: 98.7685% ( 1) 00:10:33.183 8.012 - 8.059: 98.7760% ( 1) 00:10:33.183 8.201 - 8.249: 98.7834% ( 1) 00:10:33.183 8.391 - 8.439: 98.7908% ( 1) 00:10:33.183 8.533 - 8.581: 98.7982% ( 1) 00:10:33.183 9.007 - 9.055: 98.8131% ( 2) 00:10:33.183 15.455 - 15.550: 98.8205% ( 1) 00:10:33.183 15.644 - 15.739: 98.8279% ( 1) 00:10:33.183 15.739 - 15.834: 98.8353% ( 1) 00:10:33.183 15.834 - 15.929: 98.8650% ( 4) 00:10:33.183 15.929 - 16.024: 98.9021% ( 5) 00:10:33.183 16.024 - 16.119: 98.9243% ( 3) 00:10:33.183 16.119 - 16.213: 98.9540% ( 4) 00:10:33.183 16.213 - 16.308: 98.9837% ( 4) 00:10:33.183 16.308 - 16.403: 99.0208% ( 5) 00:10:33.183 16.403 - 16.498: 99.0579% ( 5) 00:10:33.183 16.498 - 16.593: 99.1172% ( 8) 00:10:33.183 16.593 - 16.687: 99.1320% ( 2) 00:10:33.183 16.687 - 16.782: 99.1691% ( 5) 00:10:33.183 16.782 - 16.877: 99.2062% ( 5) 00:10:33.183 16.877 - 16.972: 99.2507% ( 6) 00:10:33.183 16.972 - 17.067: 99.2656% ( 2) 00:10:33.183 17.067 - 17.161: 99.2878% ( 3) 00:10:33.183 17.161 - 17.256: 99.3175% ( 4) 00:10:33.183 17.256 - 17.351: 99.3249% ( 1) 00:10:33.183 17.541 - 17.636: 99.3398% ( 2) 00:10:33.183 17.730 - 17.825: 99.3620% ( 3) 00:10:33.183 17.825 - 17.920: 99.3694% ( 1) 00:10:33.183 17.920 - 18.015: 99.3769% ( 1) 00:10:33.183 18.110 - 18.204: 99.3843% ( 1) 00:10:33.183 18.204 - 18.299: 99.3917% ( 1) 00:10:33.183 18.489 - 18.584: 99.3991% ( 1) 00:10:33.183 23.419 - 23.514: 99.4065% ( 1) 00:10:33.183 3980.705 - 4004.978: 99.8516% ( 60) 00:10:33.183 4004.978 - 4029.250: 100.0000% ( 20) 00:10:33.183 00:10:33.183 16:27:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:33.183 16:27:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:33.183 16:27:12 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:33.183 16:27:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:33.183 16:27:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:33.442 [ 00:10:33.442 { 00:10:33.442 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:33.442 "subtype": "Discovery", 00:10:33.442 "listen_addresses": [], 00:10:33.442 "allow_any_host": true, 00:10:33.442 "hosts": [] 00:10:33.442 }, 00:10:33.442 { 00:10:33.442 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:33.442 "subtype": "NVMe", 00:10:33.442 "listen_addresses": [ 00:10:33.442 { 00:10:33.442 "trtype": "VFIOUSER", 00:10:33.442 "adrfam": "IPv4", 00:10:33.442 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:33.442 "trsvcid": "0" 00:10:33.442 } 00:10:33.442 ], 00:10:33.442 "allow_any_host": true, 00:10:33.442 "hosts": [], 00:10:33.442 "serial_number": "SPDK1", 00:10:33.442 "model_number": "SPDK bdev Controller", 00:10:33.442 "max_namespaces": 32, 00:10:33.442 "min_cntlid": 1, 00:10:33.442 "max_cntlid": 65519, 00:10:33.442 "namespaces": [ 00:10:33.442 { 00:10:33.442 "nsid": 1, 00:10:33.442 "bdev_name": "Malloc1", 00:10:33.442 "name": "Malloc1", 00:10:33.442 "nguid": "AFAEE8F48E274822AEED212F0295F855", 00:10:33.442 "uuid": "afaee8f4-8e27-4822-aeed-212f0295f855" 00:10:33.442 }, 00:10:33.442 { 00:10:33.442 "nsid": 2, 00:10:33.442 "bdev_name": "Malloc3", 00:10:33.442 "name": "Malloc3", 00:10:33.442 "nguid": "60AF95D8164841FB826343C973320167", 00:10:33.442 "uuid": "60af95d8-1648-41fb-8263-43c973320167" 00:10:33.442 } 00:10:33.442 ] 00:10:33.442 }, 00:10:33.442 { 00:10:33.442 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:33.442 "subtype": "NVMe", 00:10:33.442 "listen_addresses": [ 00:10:33.442 { 00:10:33.442 "trtype": "VFIOUSER", 00:10:33.442 "adrfam": "IPv4", 00:10:33.442 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:10:33.442 "trsvcid": "0" 00:10:33.442 } 00:10:33.442 ], 00:10:33.442 "allow_any_host": true, 00:10:33.442 "hosts": [], 00:10:33.442 "serial_number": "SPDK2", 00:10:33.442 "model_number": "SPDK bdev Controller", 00:10:33.442 "max_namespaces": 32, 00:10:33.442 "min_cntlid": 1, 00:10:33.442 "max_cntlid": 65519, 00:10:33.442 "namespaces": [ 00:10:33.442 { 00:10:33.442 "nsid": 1, 00:10:33.442 "bdev_name": "Malloc2", 00:10:33.442 "name": "Malloc2", 00:10:33.442 "nguid": "E825FAE27C044AB79BD4ADCC68258544", 00:10:33.442 "uuid": "e825fae2-7c04-4ab7-9bd4-adcc68258544" 00:10:33.442 } 00:10:33.442 ] 00:10:33.442 } 00:10:33.442 ] 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1464163 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:33.706 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:33.706 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.706 [2024-07-15 16:27:13.187681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:33.984 Malloc4 00:10:33.984 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:33.984 [2024-07-15 16:27:13.556257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:33.984 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:34.242 Asynchronous Event Request test 00:10:34.242 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.242 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.242 Registering asynchronous event callbacks... 00:10:34.242 Starting namespace attribute notice tests for all controllers... 00:10:34.242 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:34.242 aer_cb - Changed Namespace 00:10:34.242 Cleaning up... 
00:10:34.242 [ 00:10:34.242 { 00:10:34.242 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:34.242 "subtype": "Discovery", 00:10:34.242 "listen_addresses": [], 00:10:34.242 "allow_any_host": true, 00:10:34.242 "hosts": [] 00:10:34.242 }, 00:10:34.242 { 00:10:34.242 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:34.242 "subtype": "NVMe", 00:10:34.242 "listen_addresses": [ 00:10:34.242 { 00:10:34.242 "trtype": "VFIOUSER", 00:10:34.242 "adrfam": "IPv4", 00:10:34.242 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:34.242 "trsvcid": "0" 00:10:34.242 } 00:10:34.242 ], 00:10:34.242 "allow_any_host": true, 00:10:34.242 "hosts": [], 00:10:34.242 "serial_number": "SPDK1", 00:10:34.242 "model_number": "SPDK bdev Controller", 00:10:34.242 "max_namespaces": 32, 00:10:34.242 "min_cntlid": 1, 00:10:34.242 "max_cntlid": 65519, 00:10:34.242 "namespaces": [ 00:10:34.242 { 00:10:34.242 "nsid": 1, 00:10:34.242 "bdev_name": "Malloc1", 00:10:34.242 "name": "Malloc1", 00:10:34.242 "nguid": "AFAEE8F48E274822AEED212F0295F855", 00:10:34.242 "uuid": "afaee8f4-8e27-4822-aeed-212f0295f855" 00:10:34.242 }, 00:10:34.242 { 00:10:34.242 "nsid": 2, 00:10:34.242 "bdev_name": "Malloc3", 00:10:34.242 "name": "Malloc3", 00:10:34.242 "nguid": "60AF95D8164841FB826343C973320167", 00:10:34.242 "uuid": "60af95d8-1648-41fb-8263-43c973320167" 00:10:34.242 } 00:10:34.242 ] 00:10:34.242 }, 00:10:34.242 { 00:10:34.242 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:34.242 "subtype": "NVMe", 00:10:34.242 "listen_addresses": [ 00:10:34.242 { 00:10:34.242 "trtype": "VFIOUSER", 00:10:34.242 "adrfam": "IPv4", 00:10:34.242 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:34.242 "trsvcid": "0" 00:10:34.242 } 00:10:34.242 ], 00:10:34.242 "allow_any_host": true, 00:10:34.242 "hosts": [], 00:10:34.242 "serial_number": "SPDK2", 00:10:34.242 "model_number": "SPDK bdev Controller", 00:10:34.242 "max_namespaces": 32, 00:10:34.242 "min_cntlid": 1, 00:10:34.242 "max_cntlid": 65519, 00:10:34.242 "namespaces": [ 
00:10:34.242 { 00:10:34.242 "nsid": 1, 00:10:34.242 "bdev_name": "Malloc2", 00:10:34.242 "name": "Malloc2", 00:10:34.242 "nguid": "E825FAE27C044AB79BD4ADCC68258544", 00:10:34.242 "uuid": "e825fae2-7c04-4ab7-9bd4-adcc68258544" 00:10:34.242 }, 00:10:34.242 { 00:10:34.242 "nsid": 2, 00:10:34.242 "bdev_name": "Malloc4", 00:10:34.242 "name": "Malloc4", 00:10:34.242 "nguid": "6B07022673AA46C28BA20ED14B20295C", 00:10:34.242 "uuid": "6b070226-73aa-46c2-8ba2-0ed14b20295c" 00:10:34.242 } 00:10:34.242 ] 00:10:34.242 } 00:10:34.242 ] 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1464163 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1457933 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1457933 ']' 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1457933 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.242 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1457933 00:10:34.500 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:34.500 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:34.501 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1457933' 00:10:34.501 killing process with pid 1457933 00:10:34.501 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1457933 00:10:34.501 16:27:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1457933 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1464305 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1464305' 00:10:34.759 Process pid: 1464305 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1464305 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1464305 ']' 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.759 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:34.759 [2024-07-15 16:27:14.281468] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:34.759 [2024-07-15 16:27:14.282481] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:34.759 [2024-07-15 16:27:14.282539] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.759 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.759 [2024-07-15 16:27:14.340257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.019 [2024-07-15 16:27:14.449246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.019 [2024-07-15 16:27:14.449308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.019 [2024-07-15 16:27:14.449336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.019 [2024-07-15 16:27:14.449348] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.019 [2024-07-15 16:27:14.449367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:35.019 [2024-07-15 16:27:14.449450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.019 [2024-07-15 16:27:14.449517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.019 [2024-07-15 16:27:14.449582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.019 [2024-07-15 16:27:14.449584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.019 [2024-07-15 16:27:14.555088] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:35.019 [2024-07-15 16:27:14.555306] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:35.019 [2024-07-15 16:27:14.555647] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:35.019 [2024-07-15 16:27:14.556318] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:35.019 [2024-07-15 16:27:14.556552] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
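The interrupt-mode restart traced above amounts to relaunching `nvmf_tgt` with `--interrupt-mode` and waiting for its RPC socket before issuing further calls. A dry-run sketch (paths and flags copied from the trace; the `run` helper only echoes each command, so this is safe to execute without an SPDK checkout):

```shell
# Dry-run sketch of the interrupt-mode target launch seen in the trace above.
# SPDK_BIN is the workspace path from the log; swap "run" for plain execution
# (and background the target with &) to do this for real.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
run() { printf '+ %s\n' "$*"; }   # echo-only stand-in for executing the command

run "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
# The real test script then records $! as nvmfpid and calls
# waitforlisten "$nvmfpid", which polls /var/tmp/spdk.sock until the
# target accepts RPCs -- the "Waiting for process to start up and listen
# on UNIX domain socket" message above comes from that helper.
```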
00:10:35.019 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.019 16:27:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:35.019 16:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:36.396 16:27:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:36.396 16:27:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:36.396 16:27:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:36.396 16:27:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:36.396 16:27:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:36.396 16:27:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:36.653 Malloc1 00:10:36.653 16:27:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:36.910 16:27:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:37.168 16:27:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:37.425 16:27:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:37.425 16:27:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:10:37.425 16:27:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:37.682 Malloc2 00:10:37.682 16:27:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:37.939 16:27:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:38.197 16:27:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1464305 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1464305 ']' 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1464305 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1464305 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1464305' 00:10:38.456 killing 
process with pid 1464305 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1464305 00:10:38.456 16:27:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1464305 00:10:38.714 16:27:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:38.714 16:27:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:38.714 00:10:38.714 real 0m52.646s 00:10:38.714 user 3m27.507s 00:10:38.714 sys 0m4.362s 00:10:38.714 16:27:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.714 16:27:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:38.714 ************************************ 00:10:38.714 END TEST nvmf_vfio_user 00:10:38.714 ************************************ 00:10:38.714 16:27:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:38.714 16:27:18 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:38.714 16:27:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.714 16:27:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.714 16:27:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.714 ************************************ 00:10:38.714 START TEST nvmf_vfio_user_nvme_compliance 00:10:38.714 ************************************ 00:10:38.714 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:38.971 * Looking for test storage... 
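The per-device setup sequence traced in this log (malloc bdev, then subsystem, namespace, and VFIOUSER listener for each of the two devices) can be sketched as the loop below. The rpc.py path and NQN/serial naming are copied from the trace; the `rpc` helper only echoes each call, and the socket directory is moved under /tmp so the sketch runs without a live SPDK target (the log uses /var/run/vfio-user):

```shell
# Dry-run sketch of setup_nvmf_vfio_user as performed by the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc() { printf '+ %s %s\n' "$RPC" "$*"; }   # echo-only stand-in for rpc.py

for i in 1 2; do
  # One vfio-user socket directory per emulated controller.
  mkdir -p "/tmp/vfio-user-demo/domain/vfio-user$i/$i"
  # 64 MiB malloc bdev with 512-byte blocks, as in the log.
  rpc bdev_malloc_create 64 512 -b "Malloc$i"
  # Subsystem allowing any host (-a), with the serial seen in the trace.
  rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
  rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
      -t VFIOUSER -a "/tmp/vfio-user-demo/domain/vfio-user$i/$i" -s 0
done
```

This presumes `nvmf_create_transport -t VFIOUSER` has already been issued, as it is earlier in the trace; the AER test then exercises these subsystems by hot-adding a second namespace (Malloc3/Malloc4) and expecting a Changed Namespace notice.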
00:10:38.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.971 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.972 16:27:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:38.972 16:27:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1464899 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1464899' 00:10:38.972 Process pid: 1464899 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1464899 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1464899 ']' 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.972 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:38.972 [2024-07-15 16:27:18.405309] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:38.972 [2024-07-15 16:27:18.405388] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.972 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.972 [2024-07-15 16:27:18.462629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:38.972 [2024-07-15 16:27:18.568720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.972 [2024-07-15 16:27:18.568787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:38.972 [2024-07-15 16:27:18.568825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.972 [2024-07-15 16:27:18.568837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.231 [2024-07-15 16:27:18.568847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.231 [2024-07-15 16:27:18.568988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.231 [2024-07-15 16:27:18.569034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.232 [2024-07-15 16:27:18.569038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.232 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.232 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:39.232 16:27:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:40.189 malloc0 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.189 16:27:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:40.448 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.448 00:10:40.448 00:10:40.448 CUnit - A unit testing framework for C - Version 2.1-3 00:10:40.448 http://cunit.sourceforge.net/ 00:10:40.448 00:10:40.448 00:10:40.448 Suite: nvme_compliance 00:10:40.448 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 16:27:19.917410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.448 [2024-07-15 16:27:19.918846] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:40.448 [2024-07-15 16:27:19.918893] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:40.448 [2024-07-15 16:27:19.918908] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:40.448 [2024-07-15 16:27:19.920432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.448 passed 00:10:40.448 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 16:27:20.008051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.448 [2024-07-15 16:27:20.011072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.708 passed 00:10:40.708 Test: admin_identify_ns ...[2024-07-15 16:27:20.100019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.708 [2024-07-15 16:27:20.161892] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:40.708 [2024-07-15 16:27:20.169893] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:40.708 [2024-07-15 16:27:20.191047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:10:40.708 passed 00:10:40.708 Test: admin_get_features_mandatory_features ...[2024-07-15 16:27:20.271832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.708 [2024-07-15 16:27:20.276881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.969 passed 00:10:40.969 Test: admin_get_features_optional_features ...[2024-07-15 16:27:20.362442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.969 [2024-07-15 16:27:20.365461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.969 passed 00:10:40.969 Test: admin_set_features_number_of_queues ...[2024-07-15 16:27:20.448642] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.969 [2024-07-15 16:27:20.552115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:41.229 passed 00:10:41.229 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 16:27:20.635598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:41.229 [2024-07-15 16:27:20.638620] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:41.229 passed 00:10:41.230 Test: admin_get_log_page_with_lpo ...[2024-07-15 16:27:20.722917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:41.230 [2024-07-15 16:27:20.792906] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:41.230 [2024-07-15 16:27:20.805971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:41.488 passed 00:10:41.488 Test: fabric_property_get ...[2024-07-15 16:27:20.888962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:41.488 [2024-07-15 16:27:20.890251] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:10:41.488 [2024-07-15 16:27:20.891985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:41.488 passed 00:10:41.488 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 16:27:20.976569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:41.488 [2024-07-15 16:27:20.977844] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:41.488 [2024-07-15 16:27:20.979588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:41.488 passed 00:10:41.488 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 16:27:21.064107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:41.746 [2024-07-15 16:27:21.151890] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:41.746 [2024-07-15 16:27:21.167903] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:41.746 [2024-07-15 16:27:21.170014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:41.746 passed 00:10:41.746 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 16:27:21.253204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:41.746 [2024-07-15 16:27:21.254493] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:41.746 [2024-07-15 16:27:21.257216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:41.746 passed 00:10:41.747 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 16:27:21.341465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.007 [2024-07-15 16:27:21.416899] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:42.007 [2024-07-15 16:27:21.440889] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:42.007 [2024-07-15 16:27:21.445998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.007 passed 00:10:42.007 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 16:27:21.530626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.007 [2024-07-15 16:27:21.531930] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:42.007 [2024-07-15 16:27:21.531986] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:42.007 [2024-07-15 16:27:21.533652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.007 passed 00:10:42.267 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 16:27:21.615107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.267 [2024-07-15 16:27:21.706884] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:42.267 [2024-07-15 16:27:21.714885] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:42.267 [2024-07-15 16:27:21.722884] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:42.267 [2024-07-15 16:27:21.730887] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:42.267 [2024-07-15 16:27:21.759986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.267 passed 00:10:42.267 Test: admin_create_io_sq_verify_pc ...[2024-07-15 16:27:21.846137] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.267 [2024-07-15 16:27:21.862899] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:42.526 [2024-07-15 16:27:21.880517] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.527 passed 00:10:42.527 Test: admin_create_io_qp_max_qps ...[2024-07-15 16:27:21.964091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.463 [2024-07-15 16:27:23.059896] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:44.030 [2024-07-15 16:27:23.443782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.030 passed 00:10:44.030 Test: admin_create_io_sq_shared_cq ...[2024-07-15 16:27:23.527086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.289 [2024-07-15 16:27:23.658885] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:44.289 [2024-07-15 16:27:23.695989] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.289 passed 00:10:44.289 00:10:44.289 Run Summary: Type Total Ran Passed Failed Inactive 00:10:44.289 suites 1 1 n/a 0 0 00:10:44.289 tests 18 18 18 0 0 00:10:44.289 asserts 360 360 360 0 n/a 00:10:44.289 00:10:44.289 Elapsed time = 1.567 seconds 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1464899 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1464899 ']' 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1464899 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1464899 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1464899' 00:10:44.289 killing process with pid 1464899 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1464899 00:10:44.289 16:27:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1464899 00:10:44.548 16:27:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:44.549 16:27:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:44.549 00:10:44.549 real 0m5.756s 00:10:44.549 user 0m16.095s 00:10:44.549 sys 0m0.536s 00:10:44.549 16:27:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.549 16:27:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:44.549 ************************************ 00:10:44.549 END TEST nvmf_vfio_user_nvme_compliance 00:10:44.549 ************************************ 00:10:44.549 16:27:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:44.549 16:27:24 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:44.549 16:27:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.549 16:27:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.549 16:27:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.549 ************************************ 00:10:44.549 START TEST nvmf_vfio_user_fuzz 00:10:44.549 ************************************ 00:10:44.549 16:27:24 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:44.808 * Looking for test storage... 00:10:44.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.808 
16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1465624 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1465624' 00:10:44.808 Process pid: 1465624 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1465624 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1465624 ']' 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.808 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:45.067 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.067 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:45.067 16:27:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:46.004 malloc0 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:46.004 16:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:18.068 Fuzzing completed. 
Shutting down the fuzz application 00:11:18.068 00:11:18.068 Dumping successful admin opcodes: 00:11:18.068 8, 9, 10, 24, 00:11:18.068 Dumping successful io opcodes: 00:11:18.068 0, 00:11:18.068 NS: 0x200003a1ef00 I/O qp, Total commands completed: 607306, total successful commands: 2349, random_seed: 274886848 00:11:18.068 NS: 0x200003a1ef00 admin qp, Total commands completed: 80067, total successful commands: 631, random_seed: 2553801216 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1465624 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1465624 ']' 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1465624 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465624 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465624' 00:11:18.068 killing process with pid 1465624 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 1465624 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1465624 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:18.068 00:11:18.068 real 0m32.357s 00:11:18.068 user 0m31.807s 00:11:18.068 sys 0m30.008s 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.068 16:27:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:18.068 ************************************ 00:11:18.068 END TEST nvmf_vfio_user_fuzz 00:11:18.068 ************************************ 00:11:18.068 16:27:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:18.068 16:27:56 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:18.069 16:27:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:18.069 16:27:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.069 16:27:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.069 ************************************ 00:11:18.069 START TEST nvmf_host_management 00:11:18.069 ************************************ 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:18.069 * Looking for test storage... 
00:11:18.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.069 
16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:18.069 16:27:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:19.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.003 
16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:19.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.003 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:19.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:19.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.004 16:27:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:19.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:11:19.004 00:11:19.004 --- 10.0.0.2 ping statistics --- 00:11:19.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.004 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:11:19.004 00:11:19.004 --- 10.0.0.1 ping statistics --- 00:11:19.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.004 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:19.004 16:27:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1471066 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1471066 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1471066 ']' 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.004 16:27:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.263 [2024-07-15 16:27:58.615792] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:11:19.263 [2024-07-15 16:27:58.615897] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.263 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.263 [2024-07-15 16:27:58.686229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.263 [2024-07-15 16:27:58.808493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.263 [2024-07-15 16:27:58.808562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.263 [2024-07-15 16:27:58.808580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.263 [2024-07-15 16:27:58.808594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.263 [2024-07-15 16:27:58.808610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:19.263 [2024-07-15 16:27:58.808788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.263 [2024-07-15 16:27:58.808840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.263 [2024-07-15 16:27:58.808905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:19.263 [2024-07-15 16:27:58.808909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.196 [2024-07-15 16:27:59.575832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.196 16:27:59 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.196 Malloc0 00:11:20.196 [2024-07-15 16:27:59.636933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1471243 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1471243 /var/tmp/bdevperf.sock 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1471243 ']' 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:20.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.196 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:20.196 { 00:11:20.197 "params": { 00:11:20.197 "name": "Nvme$subsystem", 00:11:20.197 "trtype": "$TEST_TRANSPORT", 00:11:20.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:20.197 "adrfam": "ipv4", 00:11:20.197 "trsvcid": "$NVMF_PORT", 00:11:20.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:20.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:20.197 "hdgst": ${hdgst:-false}, 00:11:20.197 "ddgst": ${ddgst:-false} 00:11:20.197 }, 00:11:20.197 "method": "bdev_nvme_attach_controller" 00:11:20.197 } 00:11:20.197 EOF 00:11:20.197 )") 00:11:20.197 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:20.197 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:11:20.197 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:20.197 16:27:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:20.197 "params": { 00:11:20.197 "name": "Nvme0", 00:11:20.197 "trtype": "tcp", 00:11:20.197 "traddr": "10.0.0.2", 00:11:20.197 "adrfam": "ipv4", 00:11:20.197 "trsvcid": "4420", 00:11:20.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:20.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:20.197 "hdgst": false, 00:11:20.197 "ddgst": false 00:11:20.197 }, 00:11:20.197 "method": "bdev_nvme_attach_controller" 00:11:20.197 }' 00:11:20.197 [2024-07-15 16:27:59.718088] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:20.197 [2024-07-15 16:27:59.718164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471243 ] 00:11:20.197 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.197 [2024-07-15 16:27:59.778180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.455 [2024-07-15 16:27:59.888340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.714 Running I/O for 10 seconds... 
00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.714 
16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:11:20.714 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:21.023 16:28:00 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.023 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 [2024-07-15 16:28:00.502102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 
[2024-07-15 16:28:00.502347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.023 [2024-07-15 16:28:00.502360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502522] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502677] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502836] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.502998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.503011] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4380 is same with the state(5) to be set 00:11:21.024 [2024-07-15 16:28:00.503143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.024 [2024-07-15 16:28:00.503193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.024 [2024-07-15 16:28:00.503223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.024 [2024-07-15 16:28:00.503246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.024 [2024-07-15 16:28:00.503263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.024 [2024-07-15 16:28:00.503278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.024 [2024-07-15 16:28:00.503295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.024 [2024-07-15 16:28:00.503311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.024 [2024-07-15 16:28:00.503327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.024 [2024-07-15 16:28:00.503348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.024 [2024-07-15 16:28:00.503365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.024 [2024-07-15 16:28:00.503380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.024 [2024-07-15 16:28:00.503396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:21.025 [2024-07-15 16:28:00.503549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503718] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.503981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.503997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.504026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.504057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.504087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.504118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.504152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.504183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.025 [2024-07-15 16:28:00.504213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.025 [2024-07-15 16:28:00.504231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.025 [2024-07-15 16:28:00.504247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 
nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:21.026 [2024-07-15 16:28:00.504383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:11:21.026 [2024-07-15 16:28:00.504414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.026 [2024-07-15 16:28:00.504508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:21.026 [2024-07-15 16:28:00.504618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.504981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.504997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.505012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.505028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.505043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.505059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.505073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.505089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.505103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.505119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.026 [2024-07-15 16:28:00.505133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.026 [2024-07-15 16:28:00.505149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.027 [2024-07-15 16:28:00.505164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.027 [2024-07-15 16:28:00.505179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:21.027 [2024-07-15 16:28:00.505193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.027 [2024-07-15 16:28:00.505208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cda900 is same with the state(5) to be set 00:11:21.027 [2024-07-15 16:28:00.505286] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cda900 was disconnected and freed. reset controller. 
00:11:21.027 [2024-07-15 16:28:00.505352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.027 [2024-07-15 16:28:00.505375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.027 [2024-07-15 16:28:00.505400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.027 [2024-07-15 16:28:00.505415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.027 [2024-07-15 16:28:00.505429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.027 [2024-07-15 16:28:00.505443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.027 [2024-07-15 16:28:00.505457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.027 [2024-07-15 16:28:00.505470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.027 [2024-07-15 16:28:00.505483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9790 is same with the state(5) to be set 00:11:21.027 [2024-07-15 16:28:00.506657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:21.027 task offset: 65536 on job bdev=Nvme0n1 fails 00:11:21.027 00:11:21.027 Latency(us) 00:11:21.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.027 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:21.027 Job: Nvme0n1 ended 
in about 0.39 seconds with error 00:11:21.027 Verification LBA range: start 0x0 length 0x400 00:11:21.027 Nvme0n1 : 0.39 1301.33 81.33 162.67 0.00 42449.75 5922.51 39807.05 00:11:21.027 =================================================================================================================== 00:11:21.027 Total : 1301.33 81.33 162.67 0.00 42449.75 5922.51 39807.05 00:11:21.027 [2024-07-15 16:28:00.508711] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:21.027 [2024-07-15 16:28:00.508753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9790 (9): Bad file descriptor 00:11:21.027 16:28:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.027 16:28:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:21.027 [2024-07-15 16:28:00.563625] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1471243 00:11:21.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1471243) - No such process 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@532 -- # local subsystem config 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:21.962 { 00:11:21.962 "params": { 00:11:21.962 "name": "Nvme$subsystem", 00:11:21.962 "trtype": "$TEST_TRANSPORT", 00:11:21.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.962 "adrfam": "ipv4", 00:11:21.962 "trsvcid": "$NVMF_PORT", 00:11:21.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.962 "hdgst": ${hdgst:-false}, 00:11:21.962 "ddgst": ${ddgst:-false} 00:11:21.962 }, 00:11:21.962 "method": "bdev_nvme_attach_controller" 00:11:21.962 } 00:11:21.962 EOF 00:11:21.962 )") 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:21.962 16:28:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:21.962 "params": { 00:11:21.962 "name": "Nvme0", 00:11:21.962 "trtype": "tcp", 00:11:21.962 "traddr": "10.0.0.2", 00:11:21.962 "adrfam": "ipv4", 00:11:21.962 "trsvcid": "4420", 00:11:21.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:21.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:21.962 "hdgst": false, 00:11:21.962 "ddgst": false 00:11:21.962 }, 00:11:21.962 "method": "bdev_nvme_attach_controller" 00:11:21.962 }' 00:11:22.221 [2024-07-15 16:28:01.561371] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:11:22.221 [2024-07-15 16:28:01.561481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471520 ] 00:11:22.221 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.221 [2024-07-15 16:28:01.621116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.221 [2024-07-15 16:28:01.732784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.479 Running I/O for 1 seconds... 00:11:23.416 00:11:23.416 Latency(us) 00:11:23.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.416 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:23.416 Verification LBA range: start 0x0 length 0x400 00:11:23.416 Nvme0n1 : 1.04 1169.88 73.12 0.00 0.00 53868.25 10922.67 42331.40 00:11:23.416 =================================================================================================================== 00:11:23.416 Total : 1169.88 73.12 0.00 0.00 53868.25 10922.67 42331.40 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@117 -- # sync 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.675 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.675 rmmod nvme_tcp 00:11:23.675 rmmod nvme_fabrics 00:11:23.675 rmmod nvme_keyring 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1471066 ']' 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1471066 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1471066 ']' 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1471066 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1471066 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1471066' 00:11:23.933 killing process with pid 1471066 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@967 -- # kill 1471066 00:11:23.933 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1471066 00:11:24.192 [2024-07-15 16:28:03.569261] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.192 16:28:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.103 16:28:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.103 16:28:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:26.103 00:11:26.103 real 0m9.128s 00:11:26.103 user 0m22.042s 00:11:26.103 sys 0m2.523s 00:11:26.103 16:28:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.103 16:28:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:26.103 ************************************ 00:11:26.103 END TEST nvmf_host_management 00:11:26.103 ************************************ 00:11:26.103 16:28:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:26.103 16:28:05 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:26.103 16:28:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.103 16:28:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.103 16:28:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:26.103 ************************************ 00:11:26.103 START TEST nvmf_lvol 00:11:26.103 ************************************ 00:11:26.103 16:28:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:26.363 * Looking for test storage... 00:11:26.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.363 
16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.363 16:28:05 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.363 16:28:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.269 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.269 
16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.269 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:11:28.269 00:11:28.269 --- 10.0.0.2 ping statistics --- 00:11:28.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.269 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:11:28.269 00:11:28.269 --- 10.0.0.1 ping statistics --- 00:11:28.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.269 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.269 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1473602 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1473602 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1473602 ']' 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.528 16:28:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:28.528 [2024-07-15 16:28:07.937544] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:28.528 [2024-07-15 16:28:07.937629] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.528 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.528 [2024-07-15 16:28:08.006724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.528 [2024-07-15 16:28:08.122397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.528 [2024-07-15 16:28:08.122458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.528 [2024-07-15 16:28:08.122475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.528 [2024-07-15 16:28:08.122488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.528 [2024-07-15 16:28:08.122500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:28.528 [2024-07-15 16:28:08.122586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.528 [2024-07-15 16:28:08.122654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.528 [2024-07-15 16:28:08.122657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.463 16:28:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.463 16:28:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:29.463 16:28:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.463 16:28:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.463 16:28:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:29.463 16:28:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.463 16:28:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:29.721 [2024-07-15 16:28:09.171797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.721 16:28:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.980 16:28:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:29.980 16:28:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.238 16:28:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:30.238 16:28:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:30.496 16:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:31.063 16:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=37305480-a8d8-4b2d-aba7-91af668ae7d8 00:11:31.063 16:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 37305480-a8d8-4b2d-aba7-91af668ae7d8 lvol 20 00:11:31.063 16:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b76b9b0c-a7e0-40d7-a0ed-06bc9ae7c10e 00:11:31.063 16:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:31.321 16:28:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b76b9b0c-a7e0-40d7-a0ed-06bc9ae7c10e 00:11:31.579 16:28:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:31.837 [2024-07-15 16:28:11.331449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.837 16:28:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:32.095 16:28:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1474054 00:11:32.095 16:28:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:32.095 16:28:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:32.095 EAL: No free 2048 kB hugepages reported on node 1 
00:11:33.032 16:28:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b76b9b0c-a7e0-40d7-a0ed-06bc9ae7c10e MY_SNAPSHOT 00:11:33.601 16:28:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cb7990fc-d860-4aa8-bca8-898f89d3b555 00:11:33.601 16:28:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b76b9b0c-a7e0-40d7-a0ed-06bc9ae7c10e 30 00:11:33.861 16:28:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cb7990fc-d860-4aa8-bca8-898f89d3b555 MY_CLONE 00:11:34.121 16:28:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=de3c2f18-ee95-4c93-91eb-237291283580 00:11:34.121 16:28:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate de3c2f18-ee95-4c93-91eb-237291283580 00:11:34.692 16:28:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1474054 00:11:42.906 Initializing NVMe Controllers 00:11:42.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:42.906 Controller IO queue size 128, less than required. 00:11:42.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:42.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:42.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:42.906 Initialization complete. Launching workers. 
00:11:42.906 ======================================================== 00:11:42.906 Latency(us) 00:11:42.906 Device Information : IOPS MiB/s Average min max 00:11:42.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10603.20 41.42 12073.21 580.12 125915.97 00:11:42.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10518.80 41.09 12169.17 2091.00 71401.76 00:11:42.906 ======================================================== 00:11:42.906 Total : 21122.00 82.51 12121.00 580.12 125915.97 00:11:42.906 00:11:42.906 16:28:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:42.906 16:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b76b9b0c-a7e0-40d7-a0ed-06bc9ae7c10e 00:11:42.906 16:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37305480-a8d8-4b2d-aba7-91af668ae7d8 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.472 rmmod nvme_tcp 00:11:43.472 rmmod nvme_fabrics 00:11:43.472 rmmod nvme_keyring 00:11:43.472 
16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1473602 ']' 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1473602 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1473602 ']' 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1473602 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473602 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473602' 00:11:43.472 killing process with pid 1473602 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1473602 00:11:43.472 16:28:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1473602 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.730 16:28:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.637 16:28:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:45.637 00:11:45.637 real 0m19.542s 00:11:45.637 user 1m7.148s 00:11:45.637 sys 0m5.342s 00:11:45.637 16:28:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.637 16:28:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:45.637 ************************************ 00:11:45.637 END TEST nvmf_lvol 00:11:45.637 ************************************ 00:11:45.896 16:28:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:45.896 16:28:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:45.896 16:28:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:45.896 16:28:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.896 16:28:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.896 ************************************ 00:11:45.896 START TEST nvmf_lvs_grow 00:11:45.896 ************************************ 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:45.896 * Looking for test storage... 
00:11:45.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.896 16:28:25 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.896 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.897 16:28:25 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.897 16:28:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.800 16:28:27 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.800 16:28:27 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.800 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.800 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.800 16:28:27 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.800 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.800 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.800 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.801 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.801 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.801 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:11:47.801 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.801 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.801 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:48.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:11:48.061 00:11:48.061 --- 10.0.0.2 ping statistics --- 00:11:48.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.061 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:48.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:11:48.061 00:11:48.061 --- 10.0.0.1 ping statistics --- 00:11:48.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.061 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1477422 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1477422 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1477422 ']' 
00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.061 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:48.061 [2024-07-15 16:28:27.575463] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:48.061 [2024-07-15 16:28:27.575535] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.061 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.061 [2024-07-15 16:28:27.636758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.320 [2024-07-15 16:28:27.745001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.320 [2024-07-15 16:28:27.745063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.320 [2024-07-15 16:28:27.745076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.320 [2024-07-15 16:28:27.745087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.320 [2024-07-15 16:28:27.745097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:48.320 [2024-07-15 16:28:27.745125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.320 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.320 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:48.320 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.320 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.320 16:28:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:48.320 16:28:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.320 16:28:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:48.579 [2024-07-15 16:28:28.165478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:48.838 ************************************ 00:11:48.838 START TEST lvs_grow_clean 00:11:48.838 ************************************ 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:48.838 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:49.097 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:49.097 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:49.356 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b82d8819-11f1-4a79-8080-00fa5deacd3a 00:11:49.356 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:11:49.356 16:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:49.617 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:49.617 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:49.617 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b82d8819-11f1-4a79-8080-00fa5deacd3a lvol 150 00:11:49.877 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b54f4286-6c81-4d8d-a7f5-d0508930c3a8 00:11:49.877 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:49.877 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:50.135 [2024-07-15 16:28:29.557187] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:50.135 [2024-07-15 16:28:29.557301] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:50.135 true 00:11:50.135 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:11:50.135 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:50.393 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:50.393 16:28:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:11:50.652 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b54f4286-6c81-4d8d-a7f5-d0508930c3a8 00:11:50.911 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:51.170 [2024-07-15 16:28:30.688617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.170 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1477857 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1477857 /var/tmp/bdevperf.sock 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1477857 ']' 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:51.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.429 16:28:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:51.429 [2024-07-15 16:28:31.009496] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:51.429 [2024-07-15 16:28:31.009581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477857 ] 00:11:51.689 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.689 [2024-07-15 16:28:31.075556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.689 [2024-07-15 16:28:31.192136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.948 16:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.948 16:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:51.948 16:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:52.207 Nvme0n1 00:11:52.207 16:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:52.466 [ 00:11:52.466 { 00:11:52.466 "name": "Nvme0n1", 00:11:52.466 "aliases": [ 00:11:52.466 "b54f4286-6c81-4d8d-a7f5-d0508930c3a8" 
00:11:52.466 ], 00:11:52.466 "product_name": "NVMe disk", 00:11:52.466 "block_size": 4096, 00:11:52.466 "num_blocks": 38912, 00:11:52.466 "uuid": "b54f4286-6c81-4d8d-a7f5-d0508930c3a8", 00:11:52.466 "assigned_rate_limits": { 00:11:52.466 "rw_ios_per_sec": 0, 00:11:52.466 "rw_mbytes_per_sec": 0, 00:11:52.466 "r_mbytes_per_sec": 0, 00:11:52.466 "w_mbytes_per_sec": 0 00:11:52.466 }, 00:11:52.466 "claimed": false, 00:11:52.466 "zoned": false, 00:11:52.466 "supported_io_types": { 00:11:52.466 "read": true, 00:11:52.466 "write": true, 00:11:52.466 "unmap": true, 00:11:52.466 "flush": true, 00:11:52.466 "reset": true, 00:11:52.466 "nvme_admin": true, 00:11:52.466 "nvme_io": true, 00:11:52.466 "nvme_io_md": false, 00:11:52.466 "write_zeroes": true, 00:11:52.466 "zcopy": false, 00:11:52.466 "get_zone_info": false, 00:11:52.466 "zone_management": false, 00:11:52.466 "zone_append": false, 00:11:52.466 "compare": true, 00:11:52.466 "compare_and_write": true, 00:11:52.466 "abort": true, 00:11:52.466 "seek_hole": false, 00:11:52.466 "seek_data": false, 00:11:52.466 "copy": true, 00:11:52.466 "nvme_iov_md": false 00:11:52.466 }, 00:11:52.466 "memory_domains": [ 00:11:52.466 { 00:11:52.466 "dma_device_id": "system", 00:11:52.466 "dma_device_type": 1 00:11:52.466 } 00:11:52.466 ], 00:11:52.466 "driver_specific": { 00:11:52.466 "nvme": [ 00:11:52.466 { 00:11:52.466 "trid": { 00:11:52.466 "trtype": "TCP", 00:11:52.466 "adrfam": "IPv4", 00:11:52.466 "traddr": "10.0.0.2", 00:11:52.466 "trsvcid": "4420", 00:11:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:52.466 }, 00:11:52.466 "ctrlr_data": { 00:11:52.466 "cntlid": 1, 00:11:52.466 "vendor_id": "0x8086", 00:11:52.466 "model_number": "SPDK bdev Controller", 00:11:52.466 "serial_number": "SPDK0", 00:11:52.466 "firmware_revision": "24.09", 00:11:52.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:52.466 "oacs": { 00:11:52.466 "security": 0, 00:11:52.466 "format": 0, 00:11:52.466 "firmware": 0, 00:11:52.466 "ns_manage": 0 
00:11:52.466 }, 00:11:52.466 "multi_ctrlr": true, 00:11:52.466 "ana_reporting": false 00:11:52.466 }, 00:11:52.466 "vs": { 00:11:52.466 "nvme_version": "1.3" 00:11:52.466 }, 00:11:52.466 "ns_data": { 00:11:52.466 "id": 1, 00:11:52.466 "can_share": true 00:11:52.466 } 00:11:52.466 } 00:11:52.466 ], 00:11:52.466 "mp_policy": "active_passive" 00:11:52.466 } 00:11:52.466 } 00:11:52.466 ] 00:11:52.466 16:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1477914 00:11:52.466 16:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:52.466 16:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:52.466 Running I/O for 10 seconds... 00:11:53.848 Latency(us) 00:11:53.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.848 Nvme0n1 : 1.00 14500.00 56.64 0.00 0.00 0.00 0.00 0.00 00:11:53.848 =================================================================================================================== 00:11:53.848 Total : 14500.00 56.64 0.00 0.00 0.00 0.00 0.00 00:11:53.848 00:11:54.415 16:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:11:54.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.673 Nvme0n1 : 2.00 14662.50 57.28 0.00 0.00 0.00 0.00 0.00 00:11:54.673 =================================================================================================================== 00:11:54.673 Total : 14662.50 57.28 0.00 0.00 0.00 0.00 0.00 00:11:54.673 00:11:54.673 true 00:11:54.673 16:28:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:11:54.673 16:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:54.933 16:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:54.933 16:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:54.933 16:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1477914 00:11:55.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.519 Nvme0n1 : 3.00 14838.67 57.96 0.00 0.00 0.00 0.00 0.00 00:11:55.519 =================================================================================================================== 00:11:55.519 Total : 14838.67 57.96 0.00 0.00 0.00 0.00 0.00 00:11:55.519 00:11:56.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.458 Nvme0n1 : 4.00 14912.75 58.25 0.00 0.00 0.00 0.00 0.00 00:11:56.458 =================================================================================================================== 00:11:56.458 Total : 14912.75 58.25 0.00 0.00 0.00 0.00 0.00 00:11:56.458 00:11:57.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.841 Nvme0n1 : 5.00 14953.80 58.41 0.00 0.00 0.00 0.00 0.00 00:11:57.841 =================================================================================================================== 00:11:57.841 Total : 14953.80 58.41 0.00 0.00 0.00 0.00 0.00 00:11:57.841 00:11:58.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.779 Nvme0n1 : 6.00 15034.83 58.73 0.00 0.00 0.00 0.00 0.00 00:11:58.779 
=================================================================================================================== 00:11:58.779 Total : 15034.83 58.73 0.00 0.00 0.00 0.00 0.00 00:11:58.779 00:11:59.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.718 Nvme0n1 : 7.00 15029.57 58.71 0.00 0.00 0.00 0.00 0.00 00:11:59.718 =================================================================================================================== 00:11:59.718 Total : 15029.57 58.71 0.00 0.00 0.00 0.00 0.00 00:11:59.718 00:12:00.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.654 Nvme0n1 : 8.00 15017.12 58.66 0.00 0.00 0.00 0.00 0.00 00:12:00.654 =================================================================================================================== 00:12:00.654 Total : 15017.12 58.66 0.00 0.00 0.00 0.00 0.00 00:12:00.654 00:12:01.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.591 Nvme0n1 : 9.00 15004.33 58.61 0.00 0.00 0.00 0.00 0.00 00:12:01.591 =================================================================================================================== 00:12:01.591 Total : 15004.33 58.61 0.00 0.00 0.00 0.00 0.00 00:12:01.591 00:12:02.528 00:12:02.528 Latency(us) 00:12:02.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.528 Nvme0n1 : 10.00 15005.31 58.61 0.00 0.00 8525.02 2524.35 16699.54 00:12:02.528 =================================================================================================================== 00:12:02.528 Total : 15005.31 58.61 0.00 0.00 8525.02 2524.35 16699.54 00:12:02.528 0 00:12:02.528 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1477857 00:12:02.528 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 
1477857 ']' 00:12:02.528 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1477857 00:12:02.528 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:02.528 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.528 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1477857 00:12:02.528 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:02.529 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:02.529 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1477857' 00:12:02.529 killing process with pid 1477857 00:12:02.529 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1477857 00:12:02.529 Received shutdown signal, test time was about 10.000000 seconds 00:12:02.529 00:12:02.529 Latency(us) 00:12:02.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.529 =================================================================================================================== 00:12:02.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.529 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1477857 00:12:02.787 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:03.045 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:03.611 16:28:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:12:03.611 16:28:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:03.611 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:03.611 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:03.611 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:03.869 [2024-07-15 16:28:43.447822] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:04.129 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:12:04.389 request: 00:12:04.389 { 00:12:04.389 "uuid": "b82d8819-11f1-4a79-8080-00fa5deacd3a", 00:12:04.389 "method": "bdev_lvol_get_lvstores", 00:12:04.389 "req_id": 1 00:12:04.389 } 00:12:04.389 Got JSON-RPC error response 00:12:04.389 response: 00:12:04.389 { 00:12:04.389 "code": -19, 00:12:04.389 "message": "No such device" 00:12:04.389 } 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:04.389 aio_bdev 
00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b54f4286-6c81-4d8d-a7f5-d0508930c3a8 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=b54f4286-6c81-4d8d-a7f5-d0508930c3a8 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:04.389 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:04.390 16:28:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:04.955 16:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b54f4286-6c81-4d8d-a7f5-d0508930c3a8 -t 2000 00:12:04.955 [ 00:12:04.955 { 00:12:04.955 "name": "b54f4286-6c81-4d8d-a7f5-d0508930c3a8", 00:12:04.955 "aliases": [ 00:12:04.955 "lvs/lvol" 00:12:04.955 ], 00:12:04.955 "product_name": "Logical Volume", 00:12:04.955 "block_size": 4096, 00:12:04.955 "num_blocks": 38912, 00:12:04.955 "uuid": "b54f4286-6c81-4d8d-a7f5-d0508930c3a8", 00:12:04.955 "assigned_rate_limits": { 00:12:04.955 "rw_ios_per_sec": 0, 00:12:04.955 "rw_mbytes_per_sec": 0, 00:12:04.955 "r_mbytes_per_sec": 0, 00:12:04.955 "w_mbytes_per_sec": 0 00:12:04.955 }, 00:12:04.955 "claimed": false, 00:12:04.955 "zoned": false, 00:12:04.955 "supported_io_types": { 00:12:04.955 "read": true, 00:12:04.955 "write": true, 00:12:04.955 "unmap": true, 00:12:04.955 "flush": false, 00:12:04.955 "reset": true, 00:12:04.955 "nvme_admin": false, 00:12:04.955 "nvme_io": false, 00:12:04.955 
"nvme_io_md": false, 00:12:04.955 "write_zeroes": true, 00:12:04.955 "zcopy": false, 00:12:04.955 "get_zone_info": false, 00:12:04.955 "zone_management": false, 00:12:04.955 "zone_append": false, 00:12:04.955 "compare": false, 00:12:04.955 "compare_and_write": false, 00:12:04.955 "abort": false, 00:12:04.955 "seek_hole": true, 00:12:04.955 "seek_data": true, 00:12:04.955 "copy": false, 00:12:04.955 "nvme_iov_md": false 00:12:04.955 }, 00:12:04.955 "driver_specific": { 00:12:04.955 "lvol": { 00:12:04.955 "lvol_store_uuid": "b82d8819-11f1-4a79-8080-00fa5deacd3a", 00:12:04.955 "base_bdev": "aio_bdev", 00:12:04.955 "thin_provision": false, 00:12:04.955 "num_allocated_clusters": 38, 00:12:04.955 "snapshot": false, 00:12:04.955 "clone": false, 00:12:04.955 "esnap_clone": false 00:12:04.955 } 00:12:04.955 } 00:12:04.955 } 00:12:04.955 ] 00:12:04.955 16:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:04.955 16:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:12:04.955 16:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:05.213 16:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:05.213 16:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:12:05.213 16:28:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:05.470 16:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:05.470 16:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b54f4286-6c81-4d8d-a7f5-d0508930c3a8 00:12:06.039 16:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b82d8819-11f1-4a79-8080-00fa5deacd3a 00:12:06.298 16:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.556 00:12:06.556 real 0m17.748s 00:12:06.556 user 0m16.425s 00:12:06.556 sys 0m2.241s 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:06.556 ************************************ 00:12:06.556 END TEST lvs_grow_clean 00:12:06.556 ************************************ 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.556 16:28:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.556 ************************************ 00:12:06.556 START TEST lvs_grow_dirty 00:12:06.556 ************************************ 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 
-- # local aio_bdev lvs lvol 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.556 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:06.814 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:06.814 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:07.072 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:07.072 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:07.072 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:07.329 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:07.330 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:07.330 16:28:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 lvol 150 00:12:07.589 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b8d6323-617d-453c-8f8b-87a47340c17b 00:12:07.589 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:07.589 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:07.846 [2024-07-15 16:28:47.318140] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:07.846 [2024-07-15 16:28:47.318258] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:07.846 true 00:12:07.846 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:07.846 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:08.103 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 
00:12:08.103 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:08.362 16:28:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b8d6323-617d-453c-8f8b-87a47340c17b 00:12:08.621 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:08.879 [2024-07-15 16:28:48.349262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.879 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1479915 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1479915 /var/tmp/bdevperf.sock 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1479915 ']' 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:09.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.138 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:09.138 [2024-07-15 16:28:48.669529] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:12:09.138 [2024-07-15 16:28:48.669609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479915 ] 00:12:09.138 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.138 [2024-07-15 16:28:48.734729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.432 [2024-07-15 16:28:48.853295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.432 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.432 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:09.432 16:28:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:10.014 Nvme0n1 00:12:10.014 16:28:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:10.270 [ 00:12:10.270 { 00:12:10.270 "name": "Nvme0n1", 00:12:10.270 "aliases": [ 00:12:10.270 "3b8d6323-617d-453c-8f8b-87a47340c17b" 00:12:10.270 ], 00:12:10.270 "product_name": "NVMe disk", 00:12:10.270 "block_size": 4096, 00:12:10.270 "num_blocks": 38912, 00:12:10.270 "uuid": "3b8d6323-617d-453c-8f8b-87a47340c17b", 00:12:10.270 "assigned_rate_limits": { 00:12:10.270 "rw_ios_per_sec": 0, 00:12:10.270 "rw_mbytes_per_sec": 0, 00:12:10.271 "r_mbytes_per_sec": 0, 00:12:10.271 "w_mbytes_per_sec": 0 00:12:10.271 }, 00:12:10.271 "claimed": false, 00:12:10.271 "zoned": false, 00:12:10.271 "supported_io_types": { 00:12:10.271 "read": true, 00:12:10.271 "write": true, 00:12:10.271 "unmap": true, 00:12:10.271 "flush": true, 00:12:10.271 "reset": true, 00:12:10.271 "nvme_admin": true, 00:12:10.271 "nvme_io": true, 00:12:10.271 "nvme_io_md": false, 00:12:10.271 "write_zeroes": true, 00:12:10.271 "zcopy": false, 00:12:10.271 "get_zone_info": false, 00:12:10.271 "zone_management": false, 00:12:10.271 "zone_append": false, 00:12:10.271 "compare": true, 00:12:10.271 "compare_and_write": true, 00:12:10.271 "abort": true, 00:12:10.271 "seek_hole": false, 00:12:10.271 "seek_data": false, 00:12:10.271 "copy": true, 00:12:10.271 "nvme_iov_md": false 00:12:10.271 }, 00:12:10.271 "memory_domains": [ 00:12:10.271 { 00:12:10.271 "dma_device_id": "system", 00:12:10.271 "dma_device_type": 1 00:12:10.271 } 00:12:10.271 ], 00:12:10.271 "driver_specific": { 00:12:10.271 "nvme": [ 00:12:10.271 { 00:12:10.271 "trid": { 00:12:10.271 "trtype": "TCP", 00:12:10.271 "adrfam": "IPv4", 00:12:10.271 "traddr": "10.0.0.2", 00:12:10.271 "trsvcid": "4420", 00:12:10.271 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:10.271 }, 00:12:10.271 "ctrlr_data": { 00:12:10.271 "cntlid": 1, 00:12:10.271 "vendor_id": "0x8086", 00:12:10.271 "model_number": "SPDK bdev Controller", 00:12:10.271 
"serial_number": "SPDK0", 00:12:10.271 "firmware_revision": "24.09", 00:12:10.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:10.271 "oacs": { 00:12:10.271 "security": 0, 00:12:10.271 "format": 0, 00:12:10.271 "firmware": 0, 00:12:10.271 "ns_manage": 0 00:12:10.271 }, 00:12:10.271 "multi_ctrlr": true, 00:12:10.271 "ana_reporting": false 00:12:10.271 }, 00:12:10.271 "vs": { 00:12:10.271 "nvme_version": "1.3" 00:12:10.271 }, 00:12:10.271 "ns_data": { 00:12:10.271 "id": 1, 00:12:10.271 "can_share": true 00:12:10.271 } 00:12:10.271 } 00:12:10.271 ], 00:12:10.271 "mp_policy": "active_passive" 00:12:10.271 } 00:12:10.271 } 00:12:10.271 ] 00:12:10.271 16:28:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1480054 00:12:10.271 16:28:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:10.271 16:28:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:10.528 Running I/O for 10 seconds... 
00:12:11.464 Latency(us) 00:12:11.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.464 Nvme0n1 : 1.00 14711.00 57.46 0.00 0.00 0.00 0.00 0.00 00:12:11.464 =================================================================================================================== 00:12:11.464 Total : 14711.00 57.46 0.00 0.00 0.00 0.00 0.00 00:12:11.464 00:12:12.401 16:28:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:12.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.401 Nvme0n1 : 2.00 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:12:12.401 =================================================================================================================== 00:12:12.401 Total : 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:12:12.401 00:12:12.659 true 00:12:12.659 16:28:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:12.659 16:28:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:12.918 16:28:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:12.918 16:28:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:12.918 16:28:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1480054 00:12:13.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.486 Nvme0n1 : 3.00 14706.00 57.45 0.00 0.00 0.00 0.00 0.00 00:12:13.486 
=================================================================================================================== 00:12:13.486 Total : 14706.00 57.45 0.00 0.00 0.00 0.00 0.00 00:12:13.486 00:12:14.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.421 Nvme0n1 : 4.00 14849.00 58.00 0.00 0.00 0.00 0.00 0.00 00:12:14.421 =================================================================================================================== 00:12:14.421 Total : 14849.00 58.00 0.00 0.00 0.00 0.00 0.00 00:12:14.421 00:12:15.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.357 Nvme0n1 : 5.00 14920.00 58.28 0.00 0.00 0.00 0.00 0.00 00:12:15.357 =================================================================================================================== 00:12:15.357 Total : 14920.00 58.28 0.00 0.00 0.00 0.00 0.00 00:12:15.357 00:12:16.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.737 Nvme0n1 : 6.00 14952.50 58.41 0.00 0.00 0.00 0.00 0.00 00:12:16.737 =================================================================================================================== 00:12:16.737 Total : 14952.50 58.41 0.00 0.00 0.00 0.00 0.00 00:12:16.737 00:12:17.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.672 Nvme0n1 : 7.00 15034.14 58.73 0.00 0.00 0.00 0.00 0.00 00:12:17.672 =================================================================================================================== 00:12:17.672 Total : 15034.14 58.73 0.00 0.00 0.00 0.00 0.00 00:12:17.672 00:12:18.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.612 Nvme0n1 : 8.00 15079.25 58.90 0.00 0.00 0.00 0.00 0.00 00:12:18.612 =================================================================================================================== 00:12:18.612 Total : 15079.25 58.90 0.00 0.00 0.00 0.00 0.00 00:12:18.612 
00:12:19.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.549 Nvme0n1 : 9.00 15095.00 58.96 0.00 0.00 0.00 0.00 0.00 00:12:19.549 =================================================================================================================== 00:12:19.549 Total : 15095.00 58.96 0.00 0.00 0.00 0.00 0.00 00:12:19.549 00:12:20.486 00:12:20.487 Latency(us) 00:12:20.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.487 Nvme0n1 : 10.00 15082.60 58.92 0.00 0.00 8482.02 4878.79 17087.91 00:12:20.487 =================================================================================================================== 00:12:20.487 Total : 15082.60 58.92 0.00 0.00 8482.02 4878.79 17087.91 00:12:20.487 0 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1479915 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1479915 ']' 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1479915 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1479915 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1479915' 00:12:20.487 killing process with pid 1479915 
00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1479915 00:12:20.487 Received shutdown signal, test time was about 10.000000 seconds 00:12:20.487 00:12:20.487 Latency(us) 00:12:20.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.487 =================================================================================================================== 00:12:20.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:20.487 16:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1479915 00:12:20.745 16:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:21.003 16:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:21.572 16:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:21.572 16:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1477422 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1477422 00:12:21.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1477422 Killed "${NVMF_APP[@]}" "$@" 00:12:21.572 
16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1481385 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1481385 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1481385 ']' 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.572 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:21.831 [2024-07-15 16:29:01.213440] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:12:21.831 [2024-07-15 16:29:01.213533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.831 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.832 [2024-07-15 16:29:01.278459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.832 [2024-07-15 16:29:01.386667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.832 [2024-07-15 16:29:01.386723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.832 [2024-07-15 16:29:01.386735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.832 [2024-07-15 16:29:01.386746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.832 [2024-07-15 16:29:01.386756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.832 [2024-07-15 16:29:01.386788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.090 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.090 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:22.090 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.090 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:22.090 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:22.090 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.090 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:22.350 [2024-07-15 16:29:01.801941] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:22.350 [2024-07-15 16:29:01.802084] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:22.350 [2024-07-15 16:29:01.802143] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:22.350 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:22.350 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3b8d6323-617d-453c-8f8b-87a47340c17b 00:12:22.350 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=3b8d6323-617d-453c-8f8b-87a47340c17b 00:12:22.350 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:22.350 16:29:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:22.350 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:22.350 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:22.350 16:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:22.609 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b8d6323-617d-453c-8f8b-87a47340c17b -t 2000 00:12:22.869 [ 00:12:22.869 { 00:12:22.869 "name": "3b8d6323-617d-453c-8f8b-87a47340c17b", 00:12:22.869 "aliases": [ 00:12:22.869 "lvs/lvol" 00:12:22.869 ], 00:12:22.869 "product_name": "Logical Volume", 00:12:22.869 "block_size": 4096, 00:12:22.869 "num_blocks": 38912, 00:12:22.869 "uuid": "3b8d6323-617d-453c-8f8b-87a47340c17b", 00:12:22.869 "assigned_rate_limits": { 00:12:22.869 "rw_ios_per_sec": 0, 00:12:22.869 "rw_mbytes_per_sec": 0, 00:12:22.869 "r_mbytes_per_sec": 0, 00:12:22.869 "w_mbytes_per_sec": 0 00:12:22.869 }, 00:12:22.869 "claimed": false, 00:12:22.869 "zoned": false, 00:12:22.869 "supported_io_types": { 00:12:22.869 "read": true, 00:12:22.869 "write": true, 00:12:22.869 "unmap": true, 00:12:22.869 "flush": false, 00:12:22.869 "reset": true, 00:12:22.869 "nvme_admin": false, 00:12:22.869 "nvme_io": false, 00:12:22.869 "nvme_io_md": false, 00:12:22.869 "write_zeroes": true, 00:12:22.869 "zcopy": false, 00:12:22.869 "get_zone_info": false, 00:12:22.869 "zone_management": false, 00:12:22.869 "zone_append": false, 00:12:22.869 "compare": false, 00:12:22.869 "compare_and_write": false, 00:12:22.869 "abort": false, 00:12:22.869 "seek_hole": true, 00:12:22.869 "seek_data": true, 00:12:22.869 "copy": false, 00:12:22.869 "nvme_iov_md": false 
00:12:22.869 }, 00:12:22.869 "driver_specific": { 00:12:22.869 "lvol": { 00:12:22.869 "lvol_store_uuid": "4005ba89-fbd0-499e-98cc-e41404dd3aa0", 00:12:22.869 "base_bdev": "aio_bdev", 00:12:22.869 "thin_provision": false, 00:12:22.869 "num_allocated_clusters": 38, 00:12:22.869 "snapshot": false, 00:12:22.869 "clone": false, 00:12:22.869 "esnap_clone": false 00:12:22.869 } 00:12:22.869 } 00:12:22.869 } 00:12:22.869 ] 00:12:22.869 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:22.869 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:22.869 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:23.129 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:23.129 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:23.129 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:23.389 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:23.389 16:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:23.648 [2024-07-15 16:29:03.119098] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:23.648 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:23.944 request: 00:12:23.944 { 00:12:23.944 "uuid": "4005ba89-fbd0-499e-98cc-e41404dd3aa0", 00:12:23.944 "method": "bdev_lvol_get_lvstores", 
00:12:23.944 "req_id": 1 00:12:23.944 } 00:12:23.944 Got JSON-RPC error response 00:12:23.944 response: 00:12:23.944 { 00:12:23.944 "code": -19, 00:12:23.944 "message": "No such device" 00:12:23.944 } 00:12:23.944 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:23.944 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:23.944 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:23.944 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:23.944 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:24.206 aio_bdev 00:12:24.206 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b8d6323-617d-453c-8f8b-87a47340c17b 00:12:24.206 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=3b8d6323-617d-453c-8f8b-87a47340c17b 00:12:24.206 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:24.206 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:24.206 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:24.206 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:24.206 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:24.466 16:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b8d6323-617d-453c-8f8b-87a47340c17b -t 2000 00:12:24.724 [ 00:12:24.724 { 00:12:24.724 "name": "3b8d6323-617d-453c-8f8b-87a47340c17b", 00:12:24.724 "aliases": [ 00:12:24.724 "lvs/lvol" 00:12:24.724 ], 00:12:24.724 "product_name": "Logical Volume", 00:12:24.724 "block_size": 4096, 00:12:24.724 "num_blocks": 38912, 00:12:24.724 "uuid": "3b8d6323-617d-453c-8f8b-87a47340c17b", 00:12:24.724 "assigned_rate_limits": { 00:12:24.724 "rw_ios_per_sec": 0, 00:12:24.724 "rw_mbytes_per_sec": 0, 00:12:24.724 "r_mbytes_per_sec": 0, 00:12:24.724 "w_mbytes_per_sec": 0 00:12:24.724 }, 00:12:24.724 "claimed": false, 00:12:24.724 "zoned": false, 00:12:24.724 "supported_io_types": { 00:12:24.724 "read": true, 00:12:24.724 "write": true, 00:12:24.724 "unmap": true, 00:12:24.724 "flush": false, 00:12:24.724 "reset": true, 00:12:24.724 "nvme_admin": false, 00:12:24.724 "nvme_io": false, 00:12:24.724 "nvme_io_md": false, 00:12:24.724 "write_zeroes": true, 00:12:24.724 "zcopy": false, 00:12:24.724 "get_zone_info": false, 00:12:24.724 "zone_management": false, 00:12:24.724 "zone_append": false, 00:12:24.724 "compare": false, 00:12:24.724 "compare_and_write": false, 00:12:24.724 "abort": false, 00:12:24.724 "seek_hole": true, 00:12:24.724 "seek_data": true, 00:12:24.724 "copy": false, 00:12:24.724 "nvme_iov_md": false 00:12:24.724 }, 00:12:24.724 "driver_specific": { 00:12:24.724 "lvol": { 00:12:24.724 "lvol_store_uuid": "4005ba89-fbd0-499e-98cc-e41404dd3aa0", 00:12:24.724 "base_bdev": "aio_bdev", 00:12:24.724 "thin_provision": false, 00:12:24.724 "num_allocated_clusters": 38, 00:12:24.724 "snapshot": false, 00:12:24.724 "clone": false, 00:12:24.724 "esnap_clone": false 00:12:24.724 } 00:12:24.724 } 00:12:24.724 } 00:12:24.724 ] 00:12:24.724 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:24.724 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:24.724 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:24.983 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:24.983 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:24.983 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:25.240 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:25.241 16:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b8d6323-617d-453c-8f8b-87a47340c17b 00:12:25.498 16:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4005ba89-fbd0-499e-98cc-e41404dd3aa0 00:12:26.065 16:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:26.065 16:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:26.065 00:12:26.065 real 0m19.630s 00:12:26.065 user 0m49.449s 00:12:26.065 sys 0m4.690s 00:12:26.065 16:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.065 16:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:12:26.065 ************************************ 00:12:26.065 END TEST lvs_grow_dirty 00:12:26.065 ************************************ 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:26.325 nvmf_trace.0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.325 rmmod 
nvme_tcp 00:12:26.325 rmmod nvme_fabrics 00:12:26.325 rmmod nvme_keyring 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1481385 ']' 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1481385 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1481385 ']' 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1481385 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481385 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481385' 00:12:26.325 killing process with pid 1481385 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1481385 00:12:26.325 16:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1481385 00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.583 16:29:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.118 16:29:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.118 00:12:29.118 real 0m42.848s 00:12:29.118 user 1m11.879s 00:12:29.118 sys 0m8.857s 00:12:29.118 16:29:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.118 16:29:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:29.118 ************************************ 00:12:29.118 END TEST nvmf_lvs_grow 00:12:29.118 ************************************ 00:12:29.118 16:29:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.118 16:29:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:29.118 16:29:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.118 16:29:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.118 16:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.118 ************************************ 00:12:29.118 START TEST nvmf_bdev_io_wait 00:12:29.118 ************************************ 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:29.118 * Looking for test storage... 
00:12:29.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.118 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.119 16:29:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.027 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.027 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.027 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.027 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:12:31.027 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.027 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.028 16:29:10 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:31.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.028 16:29:10 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:31.028 00:12:31.028 --- 10.0.0.2 ping statistics --- 00:12:31.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.028 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:12:31.028 00:12:31.028 --- 10.0.0.1 ping statistics --- 00:12:31.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.028 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1483909 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1483909 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1483909 ']' 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.028 16:29:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.028 [2024-07-15 16:29:10.444700] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:12:31.028 [2024-07-15 16:29:10.444782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.028 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.028 [2024-07-15 16:29:10.519654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.287 [2024-07-15 16:29:10.643791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
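The `nvmf_tcp_init` plumbing traced above (the `ip netns` / `ip addr` / `iptables` / `ping` steps) follows a fixed sequence. The dry-run sketch below replays those same commands from this log; nothing is applied — every step is only echoed, and the `run` wrapper is illustrative (swap it for `"$@"` and run as root on a real rig):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from this log: the target
# port (cvl_0_0) is moved into a namespace, both sides get 10.0.0.x/24
# addresses, port 4420 is opened, and reachability is checked with ping.
run() { printf '+ %s\n' "$*"; }     # echo-only; replace with "$@" to apply

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                         # target-side port
run ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator
```

Moving the physical port into a namespace is what forces NVMe/TCP traffic between initiator and target onto the wire instead of the loopback path.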
00:12:31.287 [2024-07-15 16:29:10.643857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.287 [2024-07-15 16:29:10.643873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.287 [2024-07-15 16:29:10.643895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.287 [2024-07-15 16:29:10.643908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.287 [2024-07-15 16:29:10.644000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.287 [2024-07-15 16:29:10.644057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.287 [2024-07-15 16:29:10.644106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.287 [2024-07-15 16:29:10.644109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.854 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 [2024-07-15 16:29:11.495937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 Malloc0 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 [2024-07-15 16:29:11.556609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1484066 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1484067 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1484070 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 
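The target provisioning above is issued through the test harness's `rpc_cmd` helper. As a rough equivalent, the same RPC sequence can be sketched with SPDK's standalone `scripts/rpc.py`; the echo-only `rpc` wrapper and the `$SPDK_DIR` placeholder are illustrative, not from this log:

```shell
#!/usr/bin/env bash
# Echo-only sketch of the provisioning RPCs traced above. To run them for
# real, point rpc() at "$SPDK_DIR/scripts/rpc.py" with nvmf_tgt started
# under --wait-for-rpc. SPDK_DIR is an assumed placeholder path.
set -eu
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
rpc() { printf '+ rpc.py %s\n' "$*"; }

rpc bdev_set_options -p 5 -c 1                     # bdev options before init
rpc framework_start_init                           # finish deferred startup
rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit
rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The ordering matters: `--wait-for-rpc` holds the target in a pre-init state so `bdev_set_options` can land before `framework_start_init`, after which transport, bdev, subsystem, namespace, and listener are created in dependency order.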
00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.113 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.113 { 00:12:32.113 "params": { 00:12:32.113 "name": "Nvme$subsystem", 00:12:32.113 "trtype": "$TEST_TRANSPORT", 00:12:32.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.113 "adrfam": "ipv4", 00:12:32.113 "trsvcid": "$NVMF_PORT", 00:12:32.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.113 "hdgst": ${hdgst:-false}, 00:12:32.114 "ddgst": ${ddgst:-false} 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 } 00:12:32.114 EOF 00:12:32.114 )") 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1484072 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.114 { 00:12:32.114 "params": { 00:12:32.114 "name": "Nvme$subsystem", 00:12:32.114 "trtype": "$TEST_TRANSPORT", 00:12:32.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.114 "adrfam": "ipv4", 00:12:32.114 "trsvcid": "$NVMF_PORT", 00:12:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.114 "hdgst": ${hdgst:-false}, 00:12:32.114 "ddgst": ${ddgst:-false} 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 } 00:12:32.114 EOF 00:12:32.114 )") 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.114 { 00:12:32.114 "params": { 00:12:32.114 "name": "Nvme$subsystem", 00:12:32.114 "trtype": "$TEST_TRANSPORT", 00:12:32.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.114 "adrfam": "ipv4", 00:12:32.114 "trsvcid": "$NVMF_PORT", 00:12:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.114 "hdgst": ${hdgst:-false}, 00:12:32.114 "ddgst": ${ddgst:-false} 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 } 00:12:32.114 EOF 00:12:32.114 )") 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.114 16:29:11 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.114 { 00:12:32.114 "params": { 00:12:32.114 "name": "Nvme$subsystem", 00:12:32.114 "trtype": "$TEST_TRANSPORT", 00:12:32.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.114 "adrfam": "ipv4", 00:12:32.114 "trsvcid": "$NVMF_PORT", 00:12:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.114 "hdgst": ${hdgst:-false}, 00:12:32.114 "ddgst": ${ddgst:-false} 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 } 00:12:32.114 EOF 00:12:32.114 )") 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1484066 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.114 "params": { 00:12:32.114 "name": "Nvme1", 00:12:32.114 "trtype": "tcp", 00:12:32.114 "traddr": "10.0.0.2", 00:12:32.114 "adrfam": "ipv4", 00:12:32.114 "trsvcid": "4420", 00:12:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.114 "hdgst": false, 00:12:32.114 "ddgst": false 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 }' 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.114 "params": { 00:12:32.114 "name": "Nvme1", 00:12:32.114 "trtype": "tcp", 00:12:32.114 "traddr": "10.0.0.2", 00:12:32.114 "adrfam": "ipv4", 00:12:32.114 "trsvcid": "4420", 00:12:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.114 "hdgst": false, 00:12:32.114 "ddgst": false 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 }' 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.114 "params": { 00:12:32.114 "name": "Nvme1", 00:12:32.114 "trtype": "tcp", 00:12:32.114 "traddr": "10.0.0.2", 00:12:32.114 "adrfam": "ipv4", 00:12:32.114 "trsvcid": "4420", 00:12:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.114 "hdgst": false, 00:12:32.114 "ddgst": false 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 }' 00:12:32.114 16:29:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 
00:12:32.114 "params": { 00:12:32.114 "name": "Nvme1", 00:12:32.114 "trtype": "tcp", 00:12:32.114 "traddr": "10.0.0.2", 00:12:32.114 "adrfam": "ipv4", 00:12:32.114 "trsvcid": "4420", 00:12:32.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.114 "hdgst": false, 00:12:32.114 "ddgst": false 00:12:32.114 }, 00:12:32.114 "method": "bdev_nvme_attach_controller" 00:12:32.114 }'
[2024-07-15 16:29:11.605851] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
[2024-07-15 16:29:11.605851] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
[2024-07-15 16:29:11.605851] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
[2024-07-15 16:29:11.605850] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
[2024-07-15 16:29:11.605951] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
[2024-07-15 16:29:11.605951] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
[2024-07-15 16:29:11.605951] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
[2024-07-15 16:29:11.605951] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:12:32.114 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.372 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.372 [2024-07-15 16:29:11.777983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.372 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.372 [2024-07-15 16:29:11.874452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:32.372 [2024-07-15 16:29:11.875493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.372 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.632 [2024-07-15 16:29:11.973002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:32.632 [2024-07-15 16:29:11.976271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.632 [2024-07-15 16:29:12.074126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:32.632 [2024-07-15 16:29:12.076981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.632 [2024-07-15 16:29:12.180230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:32.891 Running I/O for 1 seconds... 00:12:32.891 Running I/O for 1 seconds... 00:12:32.891 Running I/O for 1 seconds... 00:12:32.891 Running I/O for 1 seconds...
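Each bdevperf instance above received its controller-attach config on `--json /dev/fd/63`, built by `gen_nvmf_target_json` from a heredoc. A minimal sketch of that generation follows; it produces the same per-subsystem object the log's `printf` shows, though the outer wrapper the real helper emits around these entries is not visible in this log:

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json's per-subsystem entry, filled from the
# same environment-style variables seen substituted in the log above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_entry() {
  local subsystem=$1
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_entry 1    # object matching the printf output in the log above
```

In the test itself the generated JSON is handed to bdevperf through process substitution (`--json /dev/fd/63`), so no temporary file is needed.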
00:12:33.827 00:12:33.827 Latency(us) 00:12:33.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.827 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:33.827 Nvme1n1 : 1.00 134774.36 526.46 0.00 0.00 946.00 391.40 1401.74 00:12:33.827 =================================================================================================================== 00:12:33.827 Total : 134774.36 526.46 0.00 0.00 946.00 391.40 1401.74 00:12:33.827 00:12:33.827 Latency(us) 00:12:33.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.827 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:33.827 Nvme1n1 : 1.01 11439.17 44.68 0.00 0.00 11148.72 6068.15 16505.36 00:12:33.827 =================================================================================================================== 00:12:33.827 Total : 11439.17 44.68 0.00 0.00 11148.72 6068.15 16505.36 00:12:34.085 00:12:34.085 Latency(us) 00:12:34.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.085 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:34.085 Nvme1n1 : 1.01 8554.82 33.42 0.00 0.00 14888.77 8835.22 23204.60 00:12:34.085 =================================================================================================================== 00:12:34.085 Total : 8554.82 33.42 0.00 0.00 14888.77 8835.22 23204.60 00:12:34.085 00:12:34.085 Latency(us) 00:12:34.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.085 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:34.085 Nvme1n1 : 1.01 8709.64 34.02 0.00 0.00 14637.62 6116.69 25243.50 00:12:34.085 =================================================================================================================== 00:12:34.085 Total : 8709.64 34.02 0.00 0.00 14637.62 6116.69 25243.50 00:12:34.343 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1484067 00:12:34.343 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1484070 00:12:34.343 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1484072 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.344 rmmod nvme_tcp 00:12:34.344 rmmod nvme_fabrics 00:12:34.344 rmmod nvme_keyring 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1483909 ']' 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1483909 00:12:34.344 16:29:13 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1483909 ']' 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1483909 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1483909 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1483909' 00:12:34.344 killing process with pid 1483909 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1483909 00:12:34.344 16:29:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1483909 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.603 16:29:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.145 16:29:16 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.145 00:12:37.145 real 0m8.037s 00:12:37.145 user 0m19.762s 00:12:37.145 sys 0m3.910s 00:12:37.145 16:29:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.145 16:29:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.145 ************************************ 00:12:37.145 END TEST nvmf_bdev_io_wait 00:12:37.145 ************************************ 00:12:37.145 16:29:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:37.145 16:29:16 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:37.145 16:29:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.145 16:29:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.145 16:29:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.145 ************************************ 00:12:37.145 START TEST nvmf_queue_depth 00:12:37.145 ************************************ 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:37.145 * Looking for test storage... 
00:12:37.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.145 16:29:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:39.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.050 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:39.051 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:39.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:39.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP=
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:39.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:39.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms
00:12:39.051
00:12:39.051 --- 10.0.0.2 ping statistics ---
00:12:39.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:39.051 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:39.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:39.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms
00:12:39.051
00:12:39.051 --- 10.0.0.1 ping statistics ---
00:12:39.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:39.051 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- #
set +x 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1486297 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1486297 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1486297 ']' 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.051 16:29:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:39.051 [2024-07-15 16:29:18.395415] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:12:39.051 [2024-07-15 16:29:18.395489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.051 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.051 [2024-07-15 16:29:18.466259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.051 [2024-07-15 16:29:18.582028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:39.051 [2024-07-15 16:29:18.582078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.051 [2024-07-15 16:29:18.582100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.051 [2024-07-15 16:29:18.582111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.051 [2024-07-15 16:29:18.582121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.051 [2024-07-15 16:29:18.582177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.019 [2024-07-15 16:29:19.370357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:40.019 16:29:19 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.019 Malloc0 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.019 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.020 [2024-07-15 16:29:19.431538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1486442 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1486442 /var/tmp/bdevperf.sock 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1486442 ']' 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:40.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.020 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.020 [2024-07-15 16:29:19.479710] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:12:40.020 [2024-07-15 16:29:19.479783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486442 ] 00:12:40.020 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.020 [2024-07-15 16:29:19.544709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.279 [2024-07-15 16:29:19.665893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.279 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.279 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:40.279 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:40.279 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.279 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.540 NVMe0n1 00:12:40.540 16:29:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.540 16:29:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:40.540 Running I/O for 10 seconds... 
00:12:50.523
00:12:50.523 Latency(us)
00:12:50.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:50.523 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:12:50.523 Verification LBA range: start 0x0 length 0x4000
00:12:50.523 NVMe0n1 : 10.09 8494.63 33.18 0.00 0.00 119976.91 24563.86 73400.32
00:12:50.523 ===================================================================================================================
00:12:50.523 Total : 8494.63 33.18 0.00 0.00 119976.91 24563.86 73400.32
00:12:50.523 0
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1486442
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1486442 ']'
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1486442
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486442
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486442'
killing process with pid 1486442
00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1486442
Received shutdown signal, test time was about 10.000000 seconds
00:12:50.781
00:12:50.781 Latency(us)
00:12:50.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:50.781
=================================================================================================================== 00:12:50.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.781 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1486442 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.040 rmmod nvme_tcp 00:12:51.040 rmmod nvme_fabrics 00:12:51.040 rmmod nvme_keyring 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1486297 ']' 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1486297 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1486297 ']' 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1486297 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.040 16:29:30 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486297
00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486297'
killing process with pid 1486297
00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1486297
00:12:51.040 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1486297
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:51.298 16:29:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:53.840 16:29:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:53.840
00:12:53.840 real 0m16.619s
00:12:53.840 user 0m23.434s
00:12:53.840 sys 0m2.989s
00:12:53.840 16:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:53.840 16:29:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:53.840 ************************************
00:12:53.840 END TEST nvmf_queue_depth
00:12:53.840 ************************************ 00:12:53.840 16:29:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:53.840 16:29:32 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:53.840 16:29:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.840 16:29:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.840 16:29:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:53.840 ************************************ 00:12:53.840 START TEST nvmf_target_multipath 00:12:53.840 ************************************ 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:53.840 * Looking for test storage... 00:12:53.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.840 16:29:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.749 
16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.749 16:29:34 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:55.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:55.749 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.749 
16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:55.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:55.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.749 16:29:34 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.749 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.750 16:29:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:55.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:55.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms
00:12:55.750
00:12:55.750 --- 10.0.0.2 ping statistics ---
00:12:55.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:55.750 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:55.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:55.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:12:55.750
00:12:55.750 --- 10.0.0.1 ping statistics ---
00:12:55.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:55.750 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- #
nvmftestfini 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.750 rmmod nvme_tcp 00:12:55.750 rmmod nvme_fabrics 00:12:55.750 rmmod nvme_keyring 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.750 16:29:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.659 16:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.660 16:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns
00:12:57.660 16:29:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:57.660
00:12:57.660 real 0m4.261s
00:12:57.660 user 0m0.799s
00:12:57.660 sys 0m1.456s
00:12:57.660 16:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:57.660 16:29:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:12:57.660 ************************************
00:12:57.660 END TEST nvmf_target_multipath
00:12:57.660 ************************************
00:12:57.660 16:29:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:57.660 16:29:37 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:12:57.660 16:29:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:57.660 16:29:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:57.660 16:29:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:57.660 ************************************
00:12:57.660 START TEST nvmf_zcopy
00:12:57.660 ************************************
00:12:57.660 16:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:12:57.918 * Looking for test storage...
00:12:57.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.918 16:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.821 16:29:39 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.821 16:29:39 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:59.821 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:59.821 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:59.821 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:59.821 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.821 16:29:39 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:12:59.821 00:12:59.821 --- 10.0.0.2 ping statistics --- 00:12:59.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.821 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:12:59.821 00:12:59.821 --- 10.0.0.1 ping statistics --- 00:12:59.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.821 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1491618 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1491618 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:59.821 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1491618 ']' 00:12:59.822 16:29:39 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.822 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.822 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.822 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.822 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.081 [2024-07-15 16:29:39.458379] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:13:00.081 [2024-07-15 16:29:39.458444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.081 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.081 [2024-07-15 16:29:39.519397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.081 [2024-07-15 16:29:39.628114] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.081 [2024-07-15 16:29:39.628186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.081 [2024-07-15 16:29:39.628200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.081 [2024-07-15 16:29:39.628210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.081 [2024-07-15 16:29:39.628234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:00.081 [2024-07-15 16:29:39.628262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.339 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.340 [2024-07-15 16:29:39.777009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.340 [2024-07-15 16:29:39.793207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.340 malloc0 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:00.340 { 00:13:00.340 "params": { 00:13:00.340 "name": "Nvme$subsystem", 00:13:00.340 "trtype": "$TEST_TRANSPORT", 00:13:00.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.340 "adrfam": "ipv4", 00:13:00.340 "trsvcid": "$NVMF_PORT", 00:13:00.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.340 "hdgst": ${hdgst:-false}, 00:13:00.340 "ddgst": ${ddgst:-false} 00:13:00.340 }, 00:13:00.340 "method": "bdev_nvme_attach_controller" 00:13:00.340 } 00:13:00.340 EOF 00:13:00.340 )") 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:00.340 16:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:00.340 "params": { 00:13:00.340 "name": "Nvme1", 00:13:00.340 "trtype": "tcp", 00:13:00.340 "traddr": "10.0.0.2", 00:13:00.340 "adrfam": "ipv4", 00:13:00.340 "trsvcid": "4420", 00:13:00.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.340 "hdgst": false, 00:13:00.340 "ddgst": false 00:13:00.340 }, 00:13:00.340 "method": "bdev_nvme_attach_controller" 00:13:00.340 }' 00:13:00.340 [2024-07-15 16:29:39.878201] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:13:00.340 [2024-07-15 16:29:39.878286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491639 ] 00:13:00.340 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.600 [2024-07-15 16:29:39.947002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.600 [2024-07-15 16:29:40.069553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.860 Running I/O for 10 seconds... 00:13:10.882 00:13:10.882 Latency(us) 00:13:10.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.882 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:10.882 Verification LBA range: start 0x0 length 0x1000 00:13:10.882 Nvme1n1 : 10.02 5754.94 44.96 0.00 0.00 22180.31 3034.07 30486.38 00:13:10.882 =================================================================================================================== 00:13:10.882 Total : 5754.94 44.96 0.00 0.00 22180.31 3034.07 30486.38 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1492953 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:11.142 16:29:50 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:11.142 { 00:13:11.142 "params": { 00:13:11.142 "name": "Nvme$subsystem", 00:13:11.142 "trtype": "$TEST_TRANSPORT", 00:13:11.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.142 "adrfam": "ipv4", 00:13:11.142 "trsvcid": "$NVMF_PORT", 00:13:11.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.142 "hdgst": ${hdgst:-false}, 00:13:11.142 "ddgst": ${ddgst:-false} 00:13:11.142 }, 00:13:11.142 "method": "bdev_nvme_attach_controller" 00:13:11.142 } 00:13:11.142 EOF 00:13:11.142 )") 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:11.142 [2024-07-15 16:29:50.615432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:11.142 [2024-07-15 16:29:50.615483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:13:11.142 16:29:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:11.142 "params": {
00:13:11.142 "name": "Nvme1",
00:13:11.142 "trtype": "tcp",
00:13:11.142 "traddr": "10.0.0.2",
00:13:11.142 "adrfam": "ipv4",
00:13:11.142 "trsvcid": "4420",
00:13:11.142 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:11.142 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:11.142 "hdgst": false,
00:13:11.142 "ddgst": false
00:13:11.142 },
00:13:11.142 "method": "bdev_nvme_attach_controller"
00:13:11.142 }'
00:13:11.142 [2024-07-15 16:29:50.623384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.623411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.631404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.631429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.639423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.639448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.647443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.647468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.654265] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:13:11.142 [2024-07-15 16:29:50.654332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492953 ]
00:13:11.142 [2024-07-15 16:29:50.655464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.655489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.663487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.663512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.671512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.671537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.679532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.679556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 EAL: No free 2048 kB hugepages reported on node 1
00:13:11.142 [2024-07-15 16:29:50.687554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.687578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.695577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.695601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.703603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.703627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.711621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.711646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.719628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.719648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.721883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:11.142 [2024-07-15 16:29:50.727672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.727700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.142 [2024-07-15 16:29:50.735729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.142 [2024-07-15 16:29:50.735771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.743695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.743716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.751716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.751735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.759736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.759755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.767757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.767777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.775778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.775797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.783809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.783844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.791890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.791930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.799844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.799886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.807886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.807908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.815909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.815931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.823934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.823955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.831944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.831965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.838934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.403 [2024-07-15 16:29:50.839966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.839987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.847989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.848010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.856044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.856079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.864093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.864137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.872105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.872144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.880134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.880189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.888147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.888203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.896179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.896216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.904199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.904254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.912186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.912208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.920243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.920279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.928261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.928314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.936243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.936265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.944275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.944295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.952279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.952299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.960322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.960346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.968343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.968366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.976363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.976385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.984387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.984410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.403 [2024-07-15 16:29:50.992407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.403 [2024-07-15 16:29:50.992427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.000433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.000456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.008451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.008471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.016472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.016492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.024497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.024517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.032520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.032542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.040544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.040566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.048563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.048584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.057683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.057709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.064613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.064635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 Running I/O for 5 seconds...
00:13:11.664 [2024-07-15 16:29:51.072634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.072656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.087577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.087614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.099572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.099599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.111237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.111265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.122919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.122947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.134918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.134946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.146828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.146856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.159025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.159053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.170523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.170549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.182155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.182182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.194366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.194393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.206045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.206074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.217413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.217441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.229356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.229384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.240980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.241007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.664 [2024-07-15 16:29:51.252333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.664 [2024-07-15 16:29:51.252360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.263642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.263670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.274835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.274862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.286708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.286735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.298308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.298335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.309608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.309643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.321571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.321602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.334302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.334332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.346742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.346772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.358791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.358819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.371539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.371570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.383574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.383603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.396181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.396211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.408249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.408279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.420165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.420194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.432800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.432831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.445430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.445461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.458294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.458325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.470846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.470885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.483781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.483812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.496754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.496784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.509022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.509049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:11.926 [2024-07-15 16:29:51.521327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:11.926 [2024-07-15 16:29:51.521358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.533733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.533763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.546238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.546264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.558660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.558690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.571043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.571070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.583127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.583155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.595948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.595976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.608254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.608284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.620523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.620555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.632783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.632815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.644711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.644742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.657358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.657389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.670078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.670106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.682706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.682737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.695208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.695238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.708156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.708184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.721365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.721396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.734607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.734638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.747701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.747732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.760107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.760135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.187 [2024-07-15 16:29:51.772811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.187 [2024-07-15 16:29:51.772842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.785669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.785700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.798702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.798732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.810980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.811007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.823383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.823413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.835940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.835967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.848705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.848735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.861016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.861044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.874040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.874067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.886670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.886700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.899292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.899322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.911962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.911990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.925081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.925108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.938470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.938501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.951301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.951333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.964056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.964083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.976395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.976425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:51.989158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:51.989202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:52.001518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:52.001548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:52.013990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:52.014018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:52.026237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:52.026268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.448 [2024-07-15 16:29:52.038696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.448 [2024-07-15 16:29:52.038727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.051556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.051587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.063930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.063957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.076360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.076390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.088850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.088889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.101139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.101167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.113590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.113619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.125867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.125906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.138175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.138219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.151148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.151176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.164073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.164100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.176299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.176330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.188765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.188795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.201366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.201396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.213989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.214016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.227033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.227060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.240116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.240146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.252595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.252626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.265357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.265388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.277912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.277958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.289843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.289874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.708 [2024-07-15 16:29:52.302350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.708 [2024-07-15 16:29:52.302380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.968 [2024-07-15 16:29:52.315347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.968 [2024-07-15 16:29:52.315374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.968 [2024-07-15 16:29:52.326744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.968 [2024-07-15 16:29:52.326772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.968 [2024-07-15 16:29:52.338633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.968 [2024-07-15 16:29:52.338660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.968 [2024-07-15 16:29:52.350641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.968 [2024-07-15 16:29:52.350667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.968 [2024-07-15 16:29:52.362660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.968 [2024-07-15 16:29:52.362687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.968 [2024-07-15 16:29:52.374379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.968 [2024-07-15 16:29:52.374420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:12.968 [2024-07-15 16:29:52.386319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:12.968 
[2024-07-15 16:29:52.386347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.398104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.398132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.409570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.409610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.421387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.421415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.432935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.432963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.444501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.444528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.456250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.456276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.469736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.469763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.480889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.480922] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.492444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.492471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.504599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.504627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.517008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.517036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.528794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.528821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.540023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.540051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.551903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.551933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.968 [2024-07-15 16:29:52.563817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.968 [2024-07-15 16:29:52.563860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.576024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.576052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:13.227 [2024-07-15 16:29:52.587362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.587389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.599113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.599142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.610630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.610658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.622355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.622382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.634335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.634362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.645812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.645839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.658018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.658045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.669566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.669608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.681374] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.681401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.692472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.692499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.704317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.704353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.716728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.716754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.729010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.729038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.741114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.741142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.752408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.752450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.764124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.764167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.776844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.776894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.789442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.789469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.801601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.801629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.227 [2024-07-15 16:29:52.813351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.227 [2024-07-15 16:29:52.813378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.824622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.824650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.838166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.838194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.849075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.849102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.861668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.861694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.873338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 
[2024-07-15 16:29:52.873365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.884731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.884775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.896248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.896275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.908016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.908044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.919772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.919798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.934058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.934095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.944780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.944807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.956224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.956251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.967907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.967941] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.979618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.979645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:52.991484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:52.991510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:53.003143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:53.003185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:53.015124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:53.015152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:53.027636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:53.027663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:53.040361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:53.040388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:53.052995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:53.053022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.487 [2024-07-15 16:29:53.064872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:53.064923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:13.487 [2024-07-15 16:29:53.077405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.487 [2024-07-15 16:29:53.077433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.089617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.089645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.101176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.101204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.112623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.112650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.124334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.124361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.136503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.136530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.148381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.148409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.159955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.159991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.172663] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.172694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.185728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.185758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.198901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.198929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.211227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.211258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.223035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.223063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.235226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.235256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.247385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.247415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.259822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.259853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.272109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.272137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.284791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.284822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.297173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.297200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.309359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.309391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.322117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.322144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.750 [2024-07-15 16:29:53.334552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.750 [2024-07-15 16:29:53.334582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.347218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.347249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.359729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.359761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.373173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 
[2024-07-15 16:29:53.373200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.385719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.385751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.398679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.398711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.411522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.411552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.423963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.424000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.437062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.437090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.449398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.449429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.461278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.461309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.473525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.473555] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.485442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.485473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.497924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.497967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.510137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.510164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.522795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.522827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.535127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.535169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.547323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.547354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.559712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.559742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.572356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.572387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:14.011 [2024-07-15 16:29:53.585073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.585101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.011 [2024-07-15 16:29:53.597978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.011 [2024-07-15 16:29:53.598006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.270 [2024-07-15 16:29:53.610695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.270 [2024-07-15 16:29:53.610727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.270 [2024-07-15 16:29:53.622908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.270 [2024-07-15 16:29:53.622956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.270 [2024-07-15 16:29:53.635032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.270 [2024-07-15 16:29:53.635061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.270 [2024-07-15 16:29:53.647210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.270 [2024-07-15 16:29:53.647238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.270 [2024-07-15 16:29:53.659628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.270 [2024-07-15 16:29:53.659659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.270 [2024-07-15 16:29:53.672596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.270 [2024-07-15 16:29:53.672628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.270 [2024-07-15 16:29:53.685385] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.270 [2024-07-15 16:29:53.685416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log truncated: the same error pair — subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 12 ms intervals from 16:29:53.698 through 16:29:55.697]
00:13:16.352 [2024-07-15 16:29:55.710030] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.710058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.722682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.722713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.735202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.735246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.747492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.747522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.759804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.759834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.772373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.772403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.784742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.784772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.797535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.797565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.810253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:16.352 [2024-07-15 16:29:55.810283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.352 [2024-07-15 16:29:55.822064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.822092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.833766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.833795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.845872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.845908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.858112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.858140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.869854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.869906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.881578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.881605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.893387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.893413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.905019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 
[2024-07-15 16:29:55.905047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.920773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.920803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.932323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.932351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.353 [2024-07-15 16:29:55.943736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.353 [2024-07-15 16:29:55.943779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:55.955229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:55.955257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:55.966983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:55.967012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:55.978815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:55.978842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:55.990242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:55.990270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.001917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.001945] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.013891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.013919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.025564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.025591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.037390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.037416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.049205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.049248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.061445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.061487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.073328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.073355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.085208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.085235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.613 [2024-07-15 16:29:56.093562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.093588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:16.613 
00:13:16.613 Latency(us) 
00:13:16.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:13:16.613 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:13:16.613 Nvme1n1 : 5.01 10380.65 81.10 0.00 0.00 12313.67 5267.15 23787.14 
00:13:16.613 =================================================================================================================== 
00:13:16.613 Total : 10380.65 81.10 0.00 0.00 12313.67 5267.15 23787.14 
00:13:16.613 [2024-07-15 16:29:56.100482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.613 [2024-07-15 16:29:56.100507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... preceding two messages repeated with new timestamps through 2024-07-15 16:29:56.333 ...] 00:13:16.874 [2024-07-15 16:29:56.341211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.874
[2024-07-15 16:29:56.341259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.874 [2024-07-15 16:29:56.349187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.874 [2024-07-15 16:29:56.349208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.874 [2024-07-15 16:29:56.357206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.874 [2024-07-15 16:29:56.357240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.874 [2024-07-15 16:29:56.365242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.874 [2024-07-15 16:29:56.365261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1492953) - No such process 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1492953 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.874 delay0 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.874 16:29:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:16.874 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.134 [2024-07-15 16:29:56.524082] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:23.731 Initializing NVMe Controllers 00:13:23.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:23.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:23.731 Initialization complete. Launching workers. 
00:13:23.731 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 247 00:13:23.731 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 534, failed to submit 33 00:13:23.731 success 359, unsuccess 175, failed 0 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.731 rmmod nvme_tcp 00:13:23.731 rmmod nvme_fabrics 00:13:23.731 rmmod nvme_keyring 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1491618 ']' 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1491618 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1491618 ']' 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1491618 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1491618 00:13:23.731 16:30:02 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1491618' 00:13:23.731 killing process with pid 1491618 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1491618 00:13:23.731 16:30:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1491618 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.731 16:30:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.635 16:30:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.635 00:13:25.635 real 0m27.828s 00:13:25.635 user 0m40.480s 00:13:25.635 sys 0m8.524s 00:13:25.635 16:30:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.635 16:30:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:25.635 ************************************ 00:13:25.635 END TEST nvmf_zcopy 00:13:25.635 ************************************ 00:13:25.635 16:30:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:25.635 16:30:05 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:25.635 16:30:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:25.635 16:30:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.635 16:30:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.635 ************************************ 00:13:25.635 START TEST nvmf_nmic 00:13:25.635 ************************************ 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:25.635 * Looking for test storage... 00:13:25.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.635 
16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.635 16:30:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.569 16:30:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.569 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:27.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:27.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:27.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.570 16:30:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:27.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.570 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:13:27.829 00:13:27.829 --- 10.0.0.2 ping statistics --- 00:13:27.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.829 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:27.829 00:13:27.829 --- 10.0.0.1 ping statistics --- 00:13:27.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.829 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1496340 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1496340 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1496340 ']' 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.829 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:27.829 [2024-07-15 16:30:07.249068] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:13:27.829 [2024-07-15 16:30:07.249159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.829 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.829 [2024-07-15 16:30:07.318332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.089 [2024-07-15 16:30:07.434319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.089 [2024-07-15 16:30:07.434381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.089 [2024-07-15 16:30:07.434394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.089 [2024-07-15 16:30:07.434405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.089 [2024-07-15 16:30:07.434414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:28.089 [2024-07-15 16:30:07.434491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.089 [2024-07-15 16:30:07.434567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.089 [2024-07-15 16:30:07.434625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.089 [2024-07-15 16:30:07.434627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 [2024-07-15 16:30:07.591748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 Malloc0 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 [2024-07-15 16:30:07.644974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:28.089 test case1: single bdev can't be used in multiple subsystems 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.089 [2024-07-15 16:30:07.668786] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:28.089 [2024-07-15 16:30:07.668814] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:28.089 [2024-07-15 16:30:07.668844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.089 request: 00:13:28.089 { 00:13:28.089 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:28.089 "namespace": { 00:13:28.089 "bdev_name": "Malloc0", 00:13:28.089 "no_auto_visible": false 00:13:28.089 }, 00:13:28.089 "method": "nvmf_subsystem_add_ns", 00:13:28.089 "req_id": 1 00:13:28.089 } 00:13:28.089 Got JSON-RPC error response 00:13:28.089 response: 00:13:28.089 { 00:13:28.089 "code": -32602, 00:13:28.089 "message": "Invalid parameters" 00:13:28.089 } 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:13:28.089 Adding namespace failed - expected result. 00:13:28.089 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:28.089 test case2: host connect to nvmf target in multiple paths 00:13:28.090 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:28.090 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.090 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.090 [2024-07-15 16:30:07.676916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:28.090 16:30:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.090 16:30:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.022 16:30:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:29.588 16:30:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.588 16:30:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.588 16:30:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.588 16:30:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:29.588 16:30:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:31.487 16:30:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:31.487 16:30:11 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:31.487 16:30:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.487 16:30:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:31.487 16:30:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.487 16:30:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:31.487 16:30:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:31.487 [global] 00:13:31.487 thread=1 00:13:31.487 invalidate=1 00:13:31.487 rw=write 00:13:31.487 time_based=1 00:13:31.487 runtime=1 00:13:31.487 ioengine=libaio 00:13:31.487 direct=1 00:13:31.487 bs=4096 00:13:31.487 iodepth=1 00:13:31.487 norandommap=0 00:13:31.487 numjobs=1 00:13:31.487 00:13:31.487 verify_dump=1 00:13:31.487 verify_backlog=512 00:13:31.487 verify_state_save=0 00:13:31.487 do_verify=1 00:13:31.487 verify=crc32c-intel 00:13:31.487 [job0] 00:13:31.487 filename=/dev/nvme0n1 00:13:31.745 Could not set queue depth (nvme0n1) 00:13:31.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:31.745 fio-3.35 00:13:31.745 Starting 1 thread 00:13:33.138 00:13:33.138 job0: (groupid=0, jobs=1): err= 0: pid=1497475: Mon Jul 15 16:30:12 2024 00:13:33.138 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:13:33.138 slat (nsec): min=9054, max=42939, avg=28408.10, stdev=9209.27 00:13:33.138 clat (usec): min=40746, max=41005, avg=40947.18, stdev=58.10 00:13:33.138 lat (usec): min=40755, max=41032, avg=40975.58, stdev=60.10 00:13:33.138 clat percentiles (usec): 00:13:33.138 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:33.138 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:13:33.138 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:33.138 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:33.138 | 99.99th=[41157] 00:13:33.138 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:13:33.138 slat (usec): min=8, max=30727, avg=78.72, stdev=1357.17 00:13:33.138 clat (usec): min=177, max=330, avg=205.49, stdev=15.45 00:13:33.138 lat (usec): min=185, max=30981, avg=284.20, stdev=1359.41 00:13:33.138 clat percentiles (usec): 00:13:33.138 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:13:33.138 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:13:33.138 | 70.00th=[ 208], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 223], 00:13:33.138 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 330], 99.95th=[ 330], 00:13:33.138 | 99.99th=[ 330] 00:13:33.138 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:33.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:33.138 lat (usec) : 250=92.31%, 500=3.75% 00:13:33.138 lat (msec) : 50=3.94% 00:13:33.138 cpu : usr=0.60%, sys=0.79%, ctx=536, majf=0, minf=2 00:13:33.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.138 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.138 00:13:33.138 Run status group 0 (all jobs): 00:13:33.138 READ: bw=83.2KiB/s (85.2kB/s), 83.2KiB/s-83.2KiB/s (85.2kB/s-85.2kB/s), io=84.0KiB (86.0kB), run=1009-1009msec 00:13:33.138 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:13:33.138 00:13:33.138 Disk stats (read/write): 00:13:33.138 nvme0n1: ios=44/512, 
merge=0/0, ticks=1724/102, in_queue=1826, util=98.70% 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:33.139 rmmod nvme_tcp 00:13:33.139 rmmod nvme_fabrics 00:13:33.139 rmmod nvme_keyring 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:33.139 16:30:12 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1496340 ']' 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1496340 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1496340 ']' 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1496340 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1496340 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1496340' 00:13:33.139 killing process with pid 1496340 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1496340 00:13:33.139 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1496340 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.397 16:30:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.937 16:30:15 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.937 00:13:35.937 real 0m9.917s 00:13:35.937 user 0m22.844s 00:13:35.937 sys 0m2.172s 00:13:35.937 16:30:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.937 16:30:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:35.937 ************************************ 00:13:35.937 END TEST nvmf_nmic 00:13:35.937 ************************************ 00:13:35.937 16:30:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:35.937 16:30:15 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:35.937 16:30:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:35.937 16:30:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.937 16:30:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.937 ************************************ 00:13:35.937 START TEST nvmf_fio_target 00:13:35.937 ************************************ 00:13:35.937 16:30:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:35.937 * Looking for test storage... 
00:13:35.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.938 16:30:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.843 
16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.843 
16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:37.843 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:37.843 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:37.843 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:37.843 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.843 16:30:17 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:13:37.843 00:13:37.843 --- 10.0.0.2 ping statistics --- 00:13:37.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.843 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:13:37.843 00:13:37.843 --- 10.0.0.1 ping statistics --- 00:13:37.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.843 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1499553 00:13:37.843 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.844 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1499553 00:13:37.844 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 1499553 ']' 00:13:37.844 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.844 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.844 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.844 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.844 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.844 [2024-07-15 16:30:17.365434] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:13:37.844 [2024-07-15 16:30:17.365518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.844 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.844 [2024-07-15 16:30:17.431997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.102 [2024-07-15 16:30:17.545968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.102 [2024-07-15 16:30:17.546030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.102 [2024-07-15 16:30:17.546047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.102 [2024-07-15 16:30:17.546060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.102 [2024-07-15 16:30:17.546071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:38.102 [2024-07-15 16:30:17.546130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.102 [2024-07-15 16:30:17.546193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.102 [2024-07-15 16:30:17.546255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.102 [2024-07-15 16:30:17.546258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.102 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.102 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:38.102 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.102 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:38.102 16:30:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.361 16:30:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.361 16:30:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:38.619 [2024-07-15 16:30:17.989777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.619 16:30:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.877 16:30:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:38.877 16:30:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.136 16:30:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:39.136 16:30:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:13:39.394 16:30:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:39.394 16:30:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.652 16:30:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:39.652 16:30:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:39.910 16:30:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.168 16:30:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:40.168 16:30:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.426 16:30:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:40.426 16:30:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.684 16:30:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:40.684 16:30:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:40.942 16:30:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.200 16:30:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:41.200 16:30:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.458 16:30:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:41.458 16:30:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:41.716 16:30:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.974 [2024-07-15 16:30:21.422266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.974 16:30:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:42.232 16:30:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:42.496 16:30:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.113 16:30:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:43.113 16:30:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:43.113 16:30:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.113 16:30:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:43.113 16:30:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:43.113 16:30:22 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:45.016 16:30:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:45.016 16:30:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:45.016 16:30:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.016 16:30:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:45.016 16:30:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.016 16:30:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:45.016 16:30:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:45.016 [global] 00:13:45.016 thread=1 00:13:45.016 invalidate=1 00:13:45.016 rw=write 00:13:45.016 time_based=1 00:13:45.016 runtime=1 00:13:45.016 ioengine=libaio 00:13:45.016 direct=1 00:13:45.016 bs=4096 00:13:45.016 iodepth=1 00:13:45.016 norandommap=0 00:13:45.016 numjobs=1 00:13:45.016 00:13:45.016 verify_dump=1 00:13:45.016 verify_backlog=512 00:13:45.016 verify_state_save=0 00:13:45.016 do_verify=1 00:13:45.016 verify=crc32c-intel 00:13:45.016 [job0] 00:13:45.016 filename=/dev/nvme0n1 00:13:45.016 [job1] 00:13:45.016 filename=/dev/nvme0n2 00:13:45.016 [job2] 00:13:45.016 filename=/dev/nvme0n3 00:13:45.016 [job3] 00:13:45.016 filename=/dev/nvme0n4 00:13:45.016 Could not set queue depth (nvme0n1) 00:13:45.016 Could not set queue depth (nvme0n2) 00:13:45.016 Could not set queue depth (nvme0n3) 00:13:45.016 Could not set queue depth (nvme0n4) 00:13:45.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:13:45.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.275 fio-3.35 00:13:45.275 Starting 4 threads 00:13:46.652 00:13:46.652 job0: (groupid=0, jobs=1): err= 0: pid=1500633: Mon Jul 15 16:30:25 2024 00:13:46.652 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:13:46.652 slat (nsec): min=14654, max=33735, avg=21065.05, stdev=7476.39 00:13:46.652 clat (usec): min=40927, max=41224, avg=40981.24, stdev=58.55 00:13:46.652 lat (usec): min=40958, max=41246, avg=41002.30, stdev=58.41 00:13:46.652 clat percentiles (usec): 00:13:46.653 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:46.653 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:46.653 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:46.653 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:46.653 | 99.99th=[41157] 00:13:46.653 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:13:46.653 slat (nsec): min=6148, max=40219, avg=14051.66, stdev=5772.25 00:13:46.653 clat (usec): min=188, max=447, avg=240.42, stdev=30.41 00:13:46.653 lat (usec): min=203, max=464, avg=254.47, stdev=30.29 00:13:46.653 clat percentiles (usec): 00:13:46.653 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:13:46.653 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:13:46.653 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 277], 00:13:46.653 | 99.00th=[ 388], 99.50th=[ 412], 99.90th=[ 449], 99.95th=[ 449], 00:13:46.653 | 99.99th=[ 449] 00:13:46.653 bw ( KiB/s): min= 4096, max= 4096, per=41.40%, avg=4096.00, stdev= 0.00, samples=1 00:13:46.653 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:46.653 lat (usec) : 250=65.36%, 500=30.52% 
00:13:46.653 lat (msec) : 50=4.12% 00:13:46.653 cpu : usr=0.19%, sys=0.77%, ctx=534, majf=0, minf=2 00:13:46.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.653 job1: (groupid=0, jobs=1): err= 0: pid=1500634: Mon Jul 15 16:30:25 2024 00:13:46.653 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:13:46.653 slat (nsec): min=6713, max=32990, avg=20410.57, stdev=8309.64 00:13:46.653 clat (usec): min=40912, max=42016, avg=41403.42, stdev=500.08 00:13:46.653 lat (usec): min=40930, max=42040, avg=41423.83, stdev=501.14 00:13:46.653 clat percentiles (usec): 00:13:46.653 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:46.653 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:46.653 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:46.653 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:46.653 | 99.99th=[42206] 00:13:46.653 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:13:46.653 slat (nsec): min=6805, max=41545, avg=15576.04, stdev=6304.15 00:13:46.653 clat (usec): min=188, max=2207, avg=241.20, stdev=90.98 00:13:46.653 lat (usec): min=201, max=2215, avg=256.78, stdev=90.49 00:13:46.653 clat percentiles (usec): 00:13:46.653 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:13:46.653 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 245], 00:13:46.653 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:13:46.653 | 99.00th=[ 330], 99.50th=[ 371], 99.90th=[ 2212], 99.95th=[ 2212], 00:13:46.653 | 99.99th=[ 2212] 00:13:46.653 bw ( KiB/s): min= 
4096, max= 4096, per=41.40%, avg=4096.00, stdev= 0.00, samples=1 00:13:46.653 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:46.653 lat (usec) : 250=64.54%, 500=31.33% 00:13:46.653 lat (msec) : 4=0.19%, 50=3.94% 00:13:46.653 cpu : usr=0.50%, sys=0.70%, ctx=533, majf=0, minf=1 00:13:46.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.653 job2: (groupid=0, jobs=1): err= 0: pid=1500635: Mon Jul 15 16:30:25 2024 00:13:46.653 read: IOPS=850, BW=3401KiB/s (3482kB/s)(3404KiB/1001msec) 00:13:46.653 slat (nsec): min=5183, max=58832, avg=12386.82, stdev=6057.44 00:13:46.653 clat (usec): min=278, max=41965, avg=860.42, stdev=4173.97 00:13:46.653 lat (usec): min=283, max=41982, avg=872.81, stdev=4175.42 00:13:46.653 clat percentiles (usec): 00:13:46.653 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 322], 00:13:46.653 | 30.00th=[ 334], 40.00th=[ 424], 50.00th=[ 453], 60.00th=[ 469], 00:13:46.653 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 570], 00:13:46.653 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:46.653 | 99.99th=[42206] 00:13:46.653 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:46.653 slat (nsec): min=6297, max=44895, avg=11879.23, stdev=5075.25 00:13:46.653 clat (usec): min=182, max=578, avg=233.18, stdev=31.41 00:13:46.653 lat (usec): min=190, max=589, avg=245.06, stdev=33.03 00:13:46.653 clat percentiles (usec): 00:13:46.653 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 212], 00:13:46.653 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:13:46.653 | 70.00th=[ 243], 80.00th=[ 
251], 90.00th=[ 265], 95.00th=[ 281], 00:13:46.653 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 433], 99.95th=[ 578], 00:13:46.653 | 99.99th=[ 578] 00:13:46.653 bw ( KiB/s): min= 4096, max= 4096, per=41.40%, avg=4096.00, stdev= 0.00, samples=1 00:13:46.653 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:46.653 lat (usec) : 250=43.20%, 500=42.61%, 750=13.71% 00:13:46.653 lat (msec) : 50=0.48% 00:13:46.653 cpu : usr=1.60%, sys=2.50%, ctx=1875, majf=0, minf=1 00:13:46.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 issued rwts: total=851,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.653 job3: (groupid=0, jobs=1): err= 0: pid=1500636: Mon Jul 15 16:30:25 2024 00:13:46.653 read: IOPS=50, BW=202KiB/s (207kB/s)(204KiB/1009msec) 00:13:46.653 slat (nsec): min=7464, max=39016, avg=13012.53, stdev=8001.23 00:13:46.653 clat (usec): min=428, max=43968, avg=15687.04, stdev=19840.77 00:13:46.653 lat (usec): min=446, max=43988, avg=15700.05, stdev=19846.91 00:13:46.653 clat percentiles (usec): 00:13:46.653 | 1.00th=[ 429], 5.00th=[ 537], 10.00th=[ 545], 20.00th=[ 553], 00:13:46.653 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:13:46.653 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:46.653 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:46.653 | 99.99th=[43779] 00:13:46.653 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:13:46.653 slat (nsec): min=7032, max=72618, avg=27121.10, stdev=11890.79 00:13:46.653 clat (usec): min=183, max=575, avg=370.82, stdev=80.92 00:13:46.653 lat (usec): min=191, max=629, avg=397.94, stdev=85.09 00:13:46.653 clat percentiles (usec): 
00:13:46.653 | 1.00th=[ 206], 5.00th=[ 245], 10.00th=[ 281], 20.00th=[ 297], 00:13:46.653 | 30.00th=[ 306], 40.00th=[ 334], 50.00th=[ 367], 60.00th=[ 392], 00:13:46.653 | 70.00th=[ 429], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 502], 00:13:46.653 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 578], 00:13:46.653 | 99.99th=[ 578] 00:13:46.653 bw ( KiB/s): min= 4096, max= 4096, per=41.40%, avg=4096.00, stdev= 0.00, samples=1 00:13:46.653 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:46.653 lat (usec) : 250=4.97%, 500=81.17%, 750=10.48% 00:13:46.653 lat (msec) : 50=3.37% 00:13:46.653 cpu : usr=1.19%, sys=1.59%, ctx=563, majf=0, minf=1 00:13:46.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.653 issued rwts: total=51,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.653 00:13:46.653 Run status group 0 (all jobs): 00:13:46.653 READ: bw=3652KiB/s (3740kB/s), 83.7KiB/s-3401KiB/s (85.7kB/s-3482kB/s), io=3780KiB (3871kB), run=1001-1035msec 00:13:46.653 WRITE: bw=9894KiB/s (10.1MB/s), 1979KiB/s-4092KiB/s (2026kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1035msec 00:13:46.653 00:13:46.653 Disk stats (read/write): 00:13:46.653 nvme0n1: ios=67/512, merge=0/0, ticks=740/116, in_queue=856, util=86.77% 00:13:46.653 nvme0n2: ios=43/512, merge=0/0, ticks=753/116, in_queue=869, util=87.26% 00:13:46.653 nvme0n3: ios=512/825, merge=0/0, ticks=625/196, in_queue=821, util=88.75% 00:13:46.653 nvme0n4: ios=46/512, merge=0/0, ticks=636/147, in_queue=783, util=89.50% 00:13:46.653 16:30:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:46.653 [global] 
00:13:46.653 thread=1 00:13:46.653 invalidate=1 00:13:46.653 rw=randwrite 00:13:46.653 time_based=1 00:13:46.653 runtime=1 00:13:46.653 ioengine=libaio 00:13:46.653 direct=1 00:13:46.653 bs=4096 00:13:46.653 iodepth=1 00:13:46.653 norandommap=0 00:13:46.653 numjobs=1 00:13:46.653 00:13:46.653 verify_dump=1 00:13:46.653 verify_backlog=512 00:13:46.653 verify_state_save=0 00:13:46.653 do_verify=1 00:13:46.653 verify=crc32c-intel 00:13:46.653 [job0] 00:13:46.653 filename=/dev/nvme0n1 00:13:46.653 [job1] 00:13:46.653 filename=/dev/nvme0n2 00:13:46.653 [job2] 00:13:46.653 filename=/dev/nvme0n3 00:13:46.653 [job3] 00:13:46.653 filename=/dev/nvme0n4 00:13:46.653 Could not set queue depth (nvme0n1) 00:13:46.653 Could not set queue depth (nvme0n2) 00:13:46.653 Could not set queue depth (nvme0n3) 00:13:46.653 Could not set queue depth (nvme0n4) 00:13:46.653 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.653 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.653 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.653 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:46.653 fio-3.35 00:13:46.653 Starting 4 threads 00:13:48.031 00:13:48.031 job0: (groupid=0, jobs=1): err= 0: pid=1500860: Mon Jul 15 16:30:27 2024 00:13:48.031 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:13:48.031 slat (nsec): min=8279, max=36298, avg=23182.09, stdev=9805.04 00:13:48.031 clat (usec): min=462, max=42440, avg=39628.21, stdev=8552.87 00:13:48.031 lat (usec): min=480, max=42460, avg=39651.39, stdev=8554.22 00:13:48.031 clat percentiles (usec): 00:13:48.031 | 1.00th=[ 461], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:48.031 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:48.031 | 
70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:48.031 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:48.031 | 99.99th=[42206] 00:13:48.031 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:13:48.031 slat (nsec): min=7448, max=40888, avg=12562.16, stdev=6225.31 00:13:48.031 clat (usec): min=185, max=392, avg=229.84, stdev=28.62 00:13:48.031 lat (usec): min=194, max=401, avg=242.41, stdev=31.04 00:13:48.031 clat percentiles (usec): 00:13:48.031 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:13:48.031 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 233], 60.00th=[ 239], 00:13:48.031 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 273], 00:13:48.031 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 392], 99.95th=[ 392], 00:13:48.031 | 99.99th=[ 392] 00:13:48.031 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.031 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.031 lat (usec) : 250=80.37%, 500=15.51% 00:13:48.031 lat (msec) : 50=4.11% 00:13:48.031 cpu : usr=0.77%, sys=0.48%, ctx=535, majf=0, minf=2 00:13:48.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.031 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.031 job1: (groupid=0, jobs=1): err= 0: pid=1500861: Mon Jul 15 16:30:27 2024 00:13:48.031 read: IOPS=19, BW=79.1KiB/s (81.0kB/s)(80.0KiB/1011msec) 00:13:48.031 slat (nsec): min=8782, max=32897, avg=21668.40, stdev=8539.04 00:13:48.031 clat (usec): min=26977, max=42040, avg=40595.01, stdev=3237.08 00:13:48.031 lat (usec): min=26992, max=42058, avg=40616.68, stdev=3238.59 00:13:48.031 clat percentiles 
(usec): 00:13:48.031 | 1.00th=[26870], 5.00th=[26870], 10.00th=[41157], 20.00th=[41157], 00:13:48.031 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:48.031 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:13:48.031 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:48.031 | 99.99th=[42206] 00:13:48.031 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:13:48.031 slat (nsec): min=6442, max=70815, avg=18413.82, stdev=10004.34 00:13:48.031 clat (usec): min=176, max=703, avg=364.73, stdev=142.25 00:13:48.032 lat (usec): min=188, max=718, avg=383.14, stdev=144.43 00:13:48.032 clat percentiles (usec): 00:13:48.032 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 208], 20.00th=[ 233], 00:13:48.032 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 347], 60.00th=[ 420], 00:13:48.032 | 70.00th=[ 465], 80.00th=[ 510], 90.00th=[ 562], 95.00th=[ 611], 00:13:48.032 | 99.00th=[ 644], 99.50th=[ 693], 99.90th=[ 701], 99.95th=[ 701], 00:13:48.032 | 99.99th=[ 701] 00:13:48.032 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.032 lat (usec) : 250=38.35%, 500=37.22%, 750=20.68% 00:13:48.032 lat (msec) : 50=3.76% 00:13:48.032 cpu : usr=0.59%, sys=0.79%, ctx=532, majf=0, minf=1 00:13:48.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.032 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.032 job2: (groupid=0, jobs=1): err= 0: pid=1500862: Mon Jul 15 16:30:27 2024 00:13:48.032 read: IOPS=25, BW=103KiB/s (106kB/s)(104KiB/1008msec) 00:13:48.032 slat (nsec): min=7014, max=33679, 
avg=21415.35, stdev=9662.90 00:13:48.032 clat (usec): min=456, max=42064, avg=33665.45, stdev=16509.42 00:13:48.032 lat (usec): min=468, max=42083, avg=33686.87, stdev=16512.98 00:13:48.032 clat percentiles (usec): 00:13:48.032 | 1.00th=[ 457], 5.00th=[ 461], 10.00th=[ 486], 20.00th=[41157], 00:13:48.032 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:13:48.032 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:48.032 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:48.032 | 99.99th=[42206] 00:13:48.032 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:13:48.032 slat (nsec): min=6565, max=38802, avg=11930.09, stdev=6270.02 00:13:48.032 clat (usec): min=185, max=514, avg=243.02, stdev=56.32 00:13:48.032 lat (usec): min=192, max=524, avg=254.95, stdev=58.56 00:13:48.032 clat percentiles (usec): 00:13:48.032 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:13:48.032 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 241], 00:13:48.032 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 318], 95.00th=[ 388], 00:13:48.032 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 515], 99.95th=[ 515], 00:13:48.032 | 99.99th=[ 515] 00:13:48.032 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.032 lat (usec) : 250=71.75%, 500=23.79%, 750=0.56% 00:13:48.032 lat (msec) : 50=3.90% 00:13:48.032 cpu : usr=0.30%, sys=0.60%, ctx=539, majf=0, minf=1 00:13:48.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.032 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.032 latency : target=0, window=0, percentile=100.00%, depth=1 
00:13:48.032 job3: (groupid=0, jobs=1): err= 0: pid=1500863: Mon Jul 15 16:30:27 2024 00:13:48.032 read: IOPS=92, BW=370KiB/s (379kB/s)(376KiB/1017msec) 00:13:48.032 slat (nsec): min=6616, max=64481, avg=17807.69, stdev=9370.82 00:13:48.032 clat (usec): min=366, max=42029, avg=8704.41, stdev=16529.81 00:13:48.032 lat (usec): min=380, max=42048, avg=8722.22, stdev=16532.35 00:13:48.032 clat percentiles (usec): 00:13:48.032 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 396], 00:13:48.032 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 461], 00:13:48.032 | 70.00th=[ 502], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:48.032 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:48.032 | 99.99th=[42206] 00:13:48.032 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:13:48.032 slat (nsec): min=6311, max=70449, avg=17524.98, stdev=9551.24 00:13:48.032 clat (usec): min=184, max=706, avg=361.93, stdev=146.29 00:13:48.032 lat (usec): min=191, max=723, avg=379.45, stdev=149.15 00:13:48.032 clat percentiles (usec): 00:13:48.032 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 225], 00:13:48.032 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 285], 60.00th=[ 416], 00:13:48.032 | 70.00th=[ 465], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[ 619], 00:13:48.032 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 709], 00:13:48.032 | 99.99th=[ 709] 00:13:48.032 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.032 lat (usec) : 250=34.49%, 500=41.42%, 750=20.96% 00:13:48.032 lat (msec) : 50=3.14% 00:13:48.032 cpu : usr=0.39%, sys=1.08%, ctx=606, majf=0, minf=1 00:13:48.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.032 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.032 issued rwts: total=94,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.032 00:13:48.032 Run status group 0 (all jobs): 00:13:48.032 READ: bw=628KiB/s (643kB/s), 79.1KiB/s-370KiB/s (81.0kB/s-379kB/s), io=652KiB (668kB), run=1008-1038msec 00:13:48.032 WRITE: bw=7892KiB/s (8082kB/s), 1973KiB/s-2032KiB/s (2020kB/s-2081kB/s), io=8192KiB (8389kB), run=1008-1038msec 00:13:48.032 00:13:48.032 Disk stats (read/write): 00:13:48.032 nvme0n1: ios=68/512, merge=0/0, ticks=739/116, in_queue=855, util=87.37% 00:13:48.032 nvme0n2: ios=65/512, merge=0/0, ticks=747/178, in_queue=925, util=96.24% 00:13:48.032 nvme0n3: ios=69/512, merge=0/0, ticks=759/123, in_queue=882, util=90.82% 00:13:48.032 nvme0n4: ios=139/512, merge=0/0, ticks=754/177, in_queue=931, util=95.90% 00:13:48.032 16:30:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:48.032 [global] 00:13:48.032 thread=1 00:13:48.032 invalidate=1 00:13:48.032 rw=write 00:13:48.032 time_based=1 00:13:48.032 runtime=1 00:13:48.032 ioengine=libaio 00:13:48.032 direct=1 00:13:48.032 bs=4096 00:13:48.032 iodepth=128 00:13:48.032 norandommap=0 00:13:48.032 numjobs=1 00:13:48.032 00:13:48.032 verify_dump=1 00:13:48.032 verify_backlog=512 00:13:48.032 verify_state_save=0 00:13:48.032 do_verify=1 00:13:48.032 verify=crc32c-intel 00:13:48.032 [job0] 00:13:48.032 filename=/dev/nvme0n1 00:13:48.032 [job1] 00:13:48.032 filename=/dev/nvme0n2 00:13:48.032 [job2] 00:13:48.032 filename=/dev/nvme0n3 00:13:48.032 [job3] 00:13:48.032 filename=/dev/nvme0n4 00:13:48.032 Could not set queue depth (nvme0n1) 00:13:48.032 Could not set queue depth (nvme0n2) 00:13:48.032 Could not set queue depth (nvme0n3) 00:13:48.032 Could not set queue depth (nvme0n4) 00:13:48.291 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.291 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.291 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.291 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.291 fio-3.35 00:13:48.291 Starting 4 threads 00:13:49.670 00:13:49.670 job0: (groupid=0, jobs=1): err= 0: pid=1501094: Mon Jul 15 16:30:28 2024 00:13:49.670 read: IOPS=3794, BW=14.8MiB/s (15.5MB/s)(15.0MiB/1011msec) 00:13:49.670 slat (usec): min=2, max=19056, avg=137.63, stdev=1003.72 00:13:49.670 clat (usec): min=2460, max=43212, avg=17405.18, stdev=7314.59 00:13:49.670 lat (usec): min=5006, max=43252, avg=17542.81, stdev=7399.22 00:13:49.670 clat percentiles (usec): 00:13:49.670 | 1.00th=[ 8356], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:13:49.670 | 30.00th=[11076], 40.00th=[13304], 50.00th=[15270], 60.00th=[19792], 00:13:49.670 | 70.00th=[20579], 80.00th=[21365], 90.00th=[28967], 95.00th=[30540], 00:13:49.670 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:13:49.670 | 99.99th=[43254] 00:13:49.671 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:13:49.671 slat (usec): min=3, max=8911, avg=106.30, stdev=528.70 00:13:49.671 clat (usec): min=1421, max=40573, avg=14890.96, stdev=7326.16 00:13:49.671 lat (usec): min=1430, max=40581, avg=14997.26, stdev=7365.52 00:13:49.671 clat percentiles (usec): 00:13:49.671 | 1.00th=[ 4424], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7635], 00:13:49.671 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11731], 60.00th=[14353], 00:13:49.671 | 70.00th=[22676], 80.00th=[23200], 90.00th=[23725], 95.00th=[24773], 00:13:49.671 | 99.00th=[31851], 99.50th=[32900], 99.90th=[34866], 99.95th=[39584], 00:13:49.671 | 99.99th=[40633] 00:13:49.671 bw ( KiB/s): min=12288, 
max=20480, per=25.66%, avg=16384.00, stdev=5792.62, samples=2 00:13:49.671 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:13:49.671 lat (msec) : 2=0.04%, 4=0.23%, 10=17.95%, 20=45.20%, 50=36.59% 00:13:49.671 cpu : usr=5.94%, sys=7.62%, ctx=386, majf=0, minf=13 00:13:49.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:49.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.671 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.671 job1: (groupid=0, jobs=1): err= 0: pid=1501095: Mon Jul 15 16:30:28 2024 00:13:49.671 read: IOPS=3784, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1011msec) 00:13:49.671 slat (usec): min=3, max=18786, avg=135.09, stdev=935.93 00:13:49.671 clat (usec): min=3868, max=53125, avg=16716.61, stdev=8503.43 00:13:49.671 lat (usec): min=5841, max=53142, avg=16851.69, stdev=8568.14 00:13:49.671 clat percentiles (usec): 00:13:49.671 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10683], 00:13:49.671 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13566], 60.00th=[16712], 00:13:49.671 | 70.00th=[19530], 80.00th=[20317], 90.00th=[25297], 95.00th=[38536], 00:13:49.671 | 99.00th=[47973], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:13:49.671 | 99.99th=[53216] 00:13:49.671 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:13:49.671 slat (usec): min=3, max=14862, avg=109.15, stdev=561.94 00:13:49.671 clat (usec): min=1183, max=53132, avg=15701.75, stdev=6577.04 00:13:49.671 lat (usec): min=1191, max=53151, avg=15810.90, stdev=6618.93 00:13:49.671 clat percentiles (usec): 00:13:49.671 | 1.00th=[ 4228], 5.00th=[ 6849], 10.00th=[ 8029], 20.00th=[11076], 00:13:49.671 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12256], 60.00th=[16057], 00:13:49.671 | 
70.00th=[22414], 80.00th=[22938], 90.00th=[23725], 95.00th=[23987], 00:13:49.671 | 99.00th=[30802], 99.50th=[31589], 99.90th=[44827], 99.95th=[47973], 00:13:49.671 | 99.99th=[53216] 00:13:49.671 bw ( KiB/s): min=12288, max=20480, per=25.66%, avg=16384.00, stdev=5792.62, samples=2 00:13:49.671 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:13:49.671 lat (msec) : 2=0.06%, 4=0.23%, 10=13.09%, 20=55.04%, 50=31.29% 00:13:49.671 lat (msec) : 100=0.29% 00:13:49.671 cpu : usr=5.64%, sys=7.92%, ctx=480, majf=0, minf=9 00:13:49.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:49.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.671 issued rwts: total=3826,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.671 job2: (groupid=0, jobs=1): err= 0: pid=1501096: Mon Jul 15 16:30:28 2024 00:13:49.671 read: IOPS=4171, BW=16.3MiB/s (17.1MB/s)(17.1MiB/1048msec) 00:13:49.671 slat (usec): min=3, max=8856, avg=106.46, stdev=574.42 00:13:49.671 clat (usec): min=7604, max=64601, avg=15238.03, stdev=8014.55 00:13:49.671 lat (usec): min=8683, max=64607, avg=15344.49, stdev=8041.82 00:13:49.671 clat percentiles (usec): 00:13:49.671 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[11076], 20.00th=[11994], 00:13:49.671 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[13042], 00:13:49.671 | 70.00th=[13960], 80.00th=[16909], 90.00th=[20317], 95.00th=[26084], 00:13:49.671 | 99.00th=[57410], 99.50th=[60031], 99.90th=[64750], 99.95th=[64750], 00:13:49.671 | 99.99th=[64750] 00:13:49.671 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:13:49.671 slat (usec): min=4, max=9433, avg=105.61, stdev=609.92 00:13:49.671 clat (usec): min=6633, max=32091, avg=14253.61, stdev=5138.39 00:13:49.671 lat (usec): min=6860, max=32128, 
avg=14359.22, stdev=5175.54 00:13:49.671 clat percentiles (usec): 00:13:49.671 | 1.00th=[ 7504], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[11338], 00:13:49.671 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:13:49.671 | 70.00th=[12780], 80.00th=[19006], 90.00th=[23725], 95.00th=[25035], 00:13:49.671 | 99.00th=[30802], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:13:49.671 | 99.99th=[32113] 00:13:49.671 bw ( KiB/s): min=16384, max=20480, per=28.86%, avg=18432.00, stdev=2896.31, samples=2 00:13:49.671 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:13:49.671 lat (msec) : 10=3.64%, 20=84.01%, 50=11.18%, 100=1.17% 00:13:49.671 cpu : usr=6.11%, sys=10.60%, ctx=370, majf=0, minf=11 00:13:49.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:49.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.671 issued rwts: total=4372,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.671 job3: (groupid=0, jobs=1): err= 0: pid=1501101: Mon Jul 15 16:30:28 2024 00:13:49.671 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:13:49.671 slat (usec): min=2, max=27627, avg=128.79, stdev=866.01 00:13:49.671 clat (usec): min=8943, max=56160, avg=15713.43, stdev=6414.87 00:13:49.671 lat (usec): min=8971, max=56170, avg=15842.22, stdev=6496.92 00:13:49.671 clat percentiles (usec): 00:13:49.671 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11731], 20.00th=[12780], 00:13:49.671 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13829], 00:13:49.671 | 70.00th=[14746], 80.00th=[17171], 90.00th=[23462], 95.00th=[31327], 00:13:49.671 | 99.00th=[43779], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:13:49.671 | 99.99th=[56361] 00:13:49.671 write: IOPS=3911, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1005msec); 0 
zone resets 00:13:49.671 slat (usec): min=3, max=9611, avg=126.90, stdev=626.69 00:13:49.671 clat (usec): min=275, max=63435, avg=17889.34, stdev=8615.86 00:13:49.671 lat (usec): min=6496, max=64652, avg=18016.24, stdev=8675.20 00:13:49.671 clat percentiles (usec): 00:13:49.671 | 1.00th=[ 8225], 5.00th=[10290], 10.00th=[12125], 20.00th=[13042], 00:13:49.671 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14484], 60.00th=[14746], 00:13:49.671 | 70.00th=[15926], 80.00th=[24511], 90.00th=[29230], 95.00th=[36963], 00:13:49.671 | 99.00th=[47449], 99.50th=[56361], 99.90th=[63177], 99.95th=[63177], 00:13:49.671 | 99.99th=[63177] 00:13:49.671 bw ( KiB/s): min=12288, max=18136, per=23.82%, avg=15212.00, stdev=4135.16, samples=2 00:13:49.671 iops : min= 3072, max= 4534, avg=3803.00, stdev=1033.79, samples=2 00:13:49.671 lat (usec) : 500=0.01% 00:13:49.671 lat (msec) : 10=3.53%, 20=77.76%, 50=18.06%, 100=0.64% 00:13:49.671 cpu : usr=4.88%, sys=9.86%, ctx=482, majf=0, minf=17 00:13:49.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:49.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.671 issued rwts: total=3584,3931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.671 00:13:49.671 Run status group 0 (all jobs): 00:13:49.671 READ: bw=58.2MiB/s (61.0MB/s), 13.9MiB/s-16.3MiB/s (14.6MB/s-17.1MB/s), io=61.0MiB (64.0MB), run=1005-1048msec 00:13:49.671 WRITE: bw=62.4MiB/s (65.4MB/s), 15.3MiB/s-17.2MiB/s (16.0MB/s-18.0MB/s), io=65.4MiB (68.5MB), run=1005-1048msec 00:13:49.671 00:13:49.671 Disk stats (read/write): 00:13:49.671 nvme0n1: ios=3443/3584, merge=0/0, ticks=55553/47238, in_queue=102791, util=85.57% 00:13:49.671 nvme0n2: ios=3373/3584, merge=0/0, ticks=50713/51989, in_queue=102702, util=91.36% 00:13:49.671 nvme0n3: ios=3648/3830, merge=0/0, ticks=25110/25558, 
in_queue=50668, util=95.31% 00:13:49.671 nvme0n4: ios=3002/3072, merge=0/0, ticks=22597/22340, in_queue=44937, util=94.01% 00:13:49.671 16:30:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:49.671 [global] 00:13:49.671 thread=1 00:13:49.671 invalidate=1 00:13:49.671 rw=randwrite 00:13:49.671 time_based=1 00:13:49.671 runtime=1 00:13:49.671 ioengine=libaio 00:13:49.671 direct=1 00:13:49.671 bs=4096 00:13:49.671 iodepth=128 00:13:49.671 norandommap=0 00:13:49.671 numjobs=1 00:13:49.671 00:13:49.671 verify_dump=1 00:13:49.671 verify_backlog=512 00:13:49.671 verify_state_save=0 00:13:49.671 do_verify=1 00:13:49.671 verify=crc32c-intel 00:13:49.671 [job0] 00:13:49.671 filename=/dev/nvme0n1 00:13:49.671 [job1] 00:13:49.671 filename=/dev/nvme0n2 00:13:49.671 [job2] 00:13:49.671 filename=/dev/nvme0n3 00:13:49.671 [job3] 00:13:49.671 filename=/dev/nvme0n4 00:13:49.671 Could not set queue depth (nvme0n1) 00:13:49.671 Could not set queue depth (nvme0n2) 00:13:49.671 Could not set queue depth (nvme0n3) 00:13:49.671 Could not set queue depth (nvme0n4) 00:13:49.671 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.671 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.671 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.671 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.671 fio-3.35 00:13:49.671 Starting 4 threads 00:13:51.049 00:13:51.049 job0: (groupid=0, jobs=1): err= 0: pid=1501444: Mon Jul 15 16:30:30 2024 00:13:51.049 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:13:51.049 slat (usec): min=3, max=24663, avg=109.88, stdev=774.98 00:13:51.049 clat (usec): 
min=7699, max=45299, avg=14560.13, stdev=6488.85 00:13:51.049 lat (usec): min=7726, max=45314, avg=14670.01, stdev=6537.36 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:13:51.049 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12125], 60.00th=[13304], 00:13:51.049 | 70.00th=[14484], 80.00th=[16188], 90.00th=[22938], 95.00th=[30278], 00:13:51.049 | 99.00th=[42730], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:13:51.049 | 99.99th=[45351] 00:13:51.049 write: IOPS=4328, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1007msec); 0 zone resets 00:13:51.049 slat (usec): min=3, max=18798, avg=116.70, stdev=667.55 00:13:51.049 clat (usec): min=5802, max=38182, avg=15533.92, stdev=4763.81 00:13:51.049 lat (usec): min=5814, max=38231, avg=15650.62, stdev=4811.85 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11863], 00:13:51.049 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13960], 60.00th=[15533], 00:13:51.049 | 70.00th=[17957], 80.00th=[18744], 90.00th=[22414], 95.00th=[25822], 00:13:51.049 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30016], 99.95th=[30016], 00:13:51.049 | 99.99th=[38011] 00:13:51.049 bw ( KiB/s): min=16888, max=16968, per=24.93%, avg=16928.00, stdev=56.57, samples=2 00:13:51.049 iops : min= 4222, max= 4242, avg=4232.00, stdev=14.14, samples=2 00:13:51.049 lat (msec) : 10=7.11%, 20=79.78%, 50=13.12% 00:13:51.049 cpu : usr=4.97%, sys=8.55%, ctx=458, majf=0, minf=1 00:13:51.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:51.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.049 issued rwts: total=4096,4359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.049 job1: (groupid=0, jobs=1): err= 0: pid=1501445: Mon Jul 
15 16:30:30 2024 00:13:51.049 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:13:51.049 slat (usec): min=2, max=19028, avg=124.94, stdev=896.51 00:13:51.049 clat (usec): min=7983, max=59436, avg=16739.65, stdev=8172.04 00:13:51.049 lat (usec): min=8002, max=59463, avg=16864.59, stdev=8242.09 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 8356], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10945], 00:13:51.049 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13698], 60.00th=[15533], 00:13:51.049 | 70.00th=[17957], 80.00th=[21627], 90.00th=[26870], 95.00th=[31065], 00:13:51.049 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[57934], 00:13:51.049 | 99.99th=[59507] 00:13:51.049 write: IOPS=3695, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec); 0 zone resets 00:13:51.049 slat (usec): min=3, max=21476, avg=137.46, stdev=957.49 00:13:51.049 clat (usec): min=3631, max=80743, avg=18026.28, stdev=12716.91 00:13:51.049 lat (usec): min=4818, max=80763, avg=18163.74, stdev=12794.07 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 7242], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[10814], 00:13:51.049 | 30.00th=[11338], 40.00th=[11863], 50.00th=[13042], 60.00th=[14615], 00:13:51.049 | 70.00th=[16057], 80.00th=[22676], 90.00th=[33817], 95.00th=[44303], 00:13:51.049 | 99.00th=[74974], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217], 00:13:51.049 | 99.99th=[81265] 00:13:51.049 bw ( KiB/s): min=10944, max=17784, per=21.16%, avg=14364.00, stdev=4836.61, samples=2 00:13:51.049 iops : min= 2736, max= 4446, avg=3591.00, stdev=1209.15, samples=2 00:13:51.049 lat (msec) : 4=0.01%, 10=8.44%, 20=67.58%, 50=22.29%, 100=1.67% 00:13:51.049 cpu : usr=4.28%, sys=9.26%, ctx=260, majf=0, minf=1 00:13:51.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:51.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:13:51.049 issued rwts: total=3584,3714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.049 job2: (groupid=0, jobs=1): err= 0: pid=1501447: Mon Jul 15 16:30:30 2024 00:13:51.049 read: IOPS=3743, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1005msec) 00:13:51.049 slat (usec): min=2, max=32556, avg=133.77, stdev=1186.10 00:13:51.049 clat (usec): min=1206, max=69707, avg=18563.26, stdev=10395.50 00:13:51.049 lat (usec): min=1222, max=69746, avg=18697.02, stdev=10458.96 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 3458], 5.00th=[ 6325], 10.00th=[ 9503], 20.00th=[12256], 00:13:51.049 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14877], 60.00th=[16909], 00:13:51.049 | 70.00th=[19530], 80.00th=[25560], 90.00th=[32113], 95.00th=[39584], 00:13:51.049 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:13:51.049 | 99.99th=[69731] 00:13:51.049 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:13:51.049 slat (usec): min=3, max=13917, avg=105.31, stdev=744.23 00:13:51.049 clat (usec): min=721, max=50334, avg=14036.51, stdev=5660.83 00:13:51.049 lat (usec): min=737, max=50368, avg=14141.82, stdev=5687.32 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 2507], 5.00th=[ 6063], 10.00th=[ 7504], 20.00th=[10028], 00:13:51.049 | 30.00th=[11731], 40.00th=[13435], 50.00th=[14222], 60.00th=[14746], 00:13:51.049 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19530], 95.00th=[23200], 00:13:51.049 | 99.00th=[38011], 99.50th=[39584], 99.90th=[42730], 99.95th=[46400], 00:13:51.049 | 99.99th=[50594] 00:13:51.049 bw ( KiB/s): min=16376, max=16392, per=24.13%, avg=16384.00, stdev=11.31, samples=2 00:13:51.049 iops : min= 4094, max= 4098, avg=4096.00, stdev= 2.83, samples=2 00:13:51.049 lat (usec) : 750=0.03%, 1000=0.03% 00:13:51.049 lat (msec) : 2=0.11%, 4=1.54%, 10=14.53%, 20=64.70%, 50=18.49% 00:13:51.049 lat (msec) : 100=0.57% 00:13:51.049 cpu : usr=3.49%, 
sys=6.27%, ctx=270, majf=0, minf=1 00:13:51.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:51.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.049 issued rwts: total=3762,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.049 job3: (groupid=0, jobs=1): err= 0: pid=1501448: Mon Jul 15 16:30:30 2024 00:13:51.049 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:13:51.049 slat (usec): min=2, max=28451, avg=105.36, stdev=822.67 00:13:51.049 clat (usec): min=2083, max=43756, avg=14174.57, stdev=5120.57 00:13:51.049 lat (usec): min=2087, max=43813, avg=14279.93, stdev=5154.54 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 5211], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11469], 00:13:51.049 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13435], 00:13:51.049 | 70.00th=[14877], 80.00th=[16319], 90.00th=[18744], 95.00th=[22414], 00:13:51.049 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:13:51.049 | 99.99th=[43779] 00:13:51.049 write: IOPS=4897, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1005msec); 0 zone resets 00:13:51.049 slat (usec): min=3, max=8804, avg=80.10, stdev=407.00 00:13:51.049 clat (usec): min=790, max=58269, avg=12570.88, stdev=5596.49 00:13:51.049 lat (usec): min=804, max=58277, avg=12650.98, stdev=5611.31 00:13:51.049 clat percentiles (usec): 00:13:51.049 | 1.00th=[ 3425], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 9372], 00:13:51.049 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12125], 60.00th=[12518], 00:13:51.049 | 70.00th=[13435], 80.00th=[14484], 90.00th=[17433], 95.00th=[20317], 00:13:51.049 | 99.00th=[40633], 99.50th=[49546], 99.90th=[58459], 99.95th=[58459], 00:13:51.049 | 99.99th=[58459] 00:13:51.049 bw ( KiB/s): min=17960, max=20400, per=28.25%, avg=19180.00, 
stdev=1725.34, samples=2 00:13:51.049 iops : min= 4490, max= 5100, avg=4795.00, stdev=431.34, samples=2 00:13:51.049 lat (usec) : 1000=0.06% 00:13:51.049 lat (msec) : 2=0.19%, 4=0.69%, 10=15.04%, 20=77.37%, 50=6.40% 00:13:51.049 lat (msec) : 100=0.25% 00:13:51.049 cpu : usr=4.58%, sys=8.37%, ctx=543, majf=0, minf=1 00:13:51.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:51.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.049 issued rwts: total=4608,4922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.049 00:13:51.049 Run status group 0 (all jobs): 00:13:51.049 READ: bw=62.3MiB/s (65.3MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=62.7MiB (65.7MB), run=1005-1007msec 00:13:51.049 WRITE: bw=66.3MiB/s (69.5MB/s), 14.4MiB/s-19.1MiB/s (15.1MB/s-20.1MB/s), io=66.8MiB (70.0MB), run=1005-1007msec 00:13:51.049 00:13:51.049 Disk stats (read/write): 00:13:51.049 nvme0n1: ios=3635/3759, merge=0/0, ticks=21115/24025, in_queue=45140, util=88.98% 00:13:51.049 nvme0n2: ios=3106/3336, merge=0/0, ticks=21494/27138, in_queue=48632, util=97.36% 00:13:51.049 nvme0n3: ios=3215/3584, merge=0/0, ticks=34510/30839, in_queue=65349, util=94.76% 00:13:51.049 nvme0n4: ios=3869/4096, merge=0/0, ticks=48517/46903, in_queue=95420, util=94.29% 00:13:51.049 16:30:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:51.050 16:30:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1501585 00:13:51.050 16:30:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:51.050 16:30:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:51.050 [global] 00:13:51.050 thread=1 00:13:51.050 invalidate=1 00:13:51.050 rw=read 00:13:51.050 time_based=1 
00:13:51.050 runtime=10 00:13:51.050 ioengine=libaio 00:13:51.050 direct=1 00:13:51.050 bs=4096 00:13:51.050 iodepth=1 00:13:51.050 norandommap=1 00:13:51.050 numjobs=1 00:13:51.050 00:13:51.050 [job0] 00:13:51.050 filename=/dev/nvme0n1 00:13:51.050 [job1] 00:13:51.050 filename=/dev/nvme0n2 00:13:51.050 [job2] 00:13:51.050 filename=/dev/nvme0n3 00:13:51.050 [job3] 00:13:51.050 filename=/dev/nvme0n4 00:13:51.050 Could not set queue depth (nvme0n1) 00:13:51.050 Could not set queue depth (nvme0n2) 00:13:51.050 Could not set queue depth (nvme0n3) 00:13:51.050 Could not set queue depth (nvme0n4) 00:13:51.050 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.050 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.050 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.050 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:51.050 fio-3.35 00:13:51.050 Starting 4 threads 00:13:54.338 16:30:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:54.338 16:30:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:54.338 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9338880, buflen=4096 00:13:54.338 fio: pid=1501676, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:54.338 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=6852608, buflen=4096 00:13:54.338 fio: pid=1501675, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:54.338 16:30:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.338 
16:30:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:54.595 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=30978048, buflen=4096 00:13:54.595 fio: pid=1501673, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:54.595 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.595 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:54.853 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.853 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:54.853 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=11350016, buflen=4096 00:13:54.853 fio: pid=1501674, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:54.853 00:13:54.853 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1501673: Mon Jul 15 16:30:34 2024 00:13:54.853 read: IOPS=2184, BW=8738KiB/s (8948kB/s)(29.5MiB/3462msec) 00:13:54.853 slat (usec): min=5, max=29810, avg=21.20, stdev=414.08 00:13:54.853 clat (usec): min=330, max=41110, avg=430.10, stdev=809.75 00:13:54.853 lat (usec): min=336, max=41123, avg=451.30, stdev=909.60 00:13:54.853 clat percentiles (usec): 00:13:54.853 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 379], 00:13:54.853 | 30.00th=[ 388], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 408], 00:13:54.853 | 70.00th=[ 420], 80.00th=[ 449], 90.00th=[ 494], 95.00th=[ 515], 00:13:54.853 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 685], 99.95th=[ 775], 00:13:54.853 | 99.99th=[41157] 
00:13:54.853 bw ( KiB/s): min= 6856, max=10032, per=57.25%, avg=8760.00, stdev=1224.57, samples=6 00:13:54.853 iops : min= 1714, max= 2508, avg=2190.00, stdev=306.14, samples=6 00:13:54.853 lat (usec) : 500=92.09%, 750=7.84%, 1000=0.01% 00:13:54.853 lat (msec) : 50=0.04% 00:13:54.853 cpu : usr=2.17%, sys=4.05%, ctx=7571, majf=0, minf=1 00:13:54.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 issued rwts: total=7564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.853 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1501674: Mon Jul 15 16:30:34 2024 00:13:54.853 read: IOPS=742, BW=2968KiB/s (3039kB/s)(10.8MiB/3735msec) 00:13:54.853 slat (usec): min=4, max=15658, avg=40.28, stdev=633.46 00:13:54.853 clat (usec): min=298, max=44984, avg=1305.13, stdev=6122.21 00:13:54.853 lat (usec): min=309, max=45003, avg=1342.83, stdev=6151.60 00:13:54.853 clat percentiles (usec): 00:13:54.853 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:13:54.853 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 367], 00:13:54.853 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 498], 00:13:54.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:54.853 | 99.99th=[44827] 00:13:54.853 bw ( KiB/s): min= 96, max= 9808, per=19.08%, avg=2919.14, stdev=4336.91, samples=7 00:13:54.853 iops : min= 24, max= 2452, avg=729.71, stdev=1084.12, samples=7 00:13:54.853 lat (usec) : 500=95.13%, 750=2.24%, 1000=0.11% 00:13:54.853 lat (msec) : 2=0.11%, 4=0.04%, 20=0.04%, 50=2.31% 00:13:54.853 cpu : usr=0.32%, sys=1.21%, ctx=2779, majf=0, minf=1 00:13:54.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:13:54.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.853 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1501675: Mon Jul 15 16:30:34 2024 00:13:54.853 read: IOPS=526, BW=2105KiB/s (2156kB/s)(6692KiB/3179msec) 00:13:54.853 slat (usec): min=4, max=15616, avg=28.55, stdev=453.61 00:13:54.853 clat (usec): min=278, max=42050, avg=1861.64, stdev=7677.83 00:13:54.853 lat (usec): min=285, max=42064, avg=1890.19, stdev=7690.53 00:13:54.853 clat percentiles (usec): 00:13:54.853 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 330], 00:13:54.853 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:13:54.853 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 465], 00:13:54.853 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:54.853 | 99.99th=[42206] 00:13:54.853 bw ( KiB/s): min= 88, max= 5464, per=10.93%, avg=1673.33, stdev=2412.87, samples=6 00:13:54.853 iops : min= 22, max= 1366, avg=418.33, stdev=603.22, samples=6 00:13:54.853 lat (usec) : 500=95.52%, 750=0.36%, 1000=0.12% 00:13:54.853 lat (msec) : 2=0.18%, 10=0.06%, 20=0.06%, 50=3.64% 00:13:54.853 cpu : usr=0.25%, sys=0.98%, ctx=1678, majf=0, minf=1 00:13:54.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.853 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=1501676: Mon Jul 15 16:30:34 2024 00:13:54.853 read: IOPS=780, BW=3122KiB/s (3197kB/s)(9120KiB/2921msec) 00:13:54.853 slat (nsec): min=4955, max=66988, avg=24145.17, stdev=10941.16 00:13:54.853 clat (usec): min=281, max=41982, avg=1241.28, stdev=5529.74 00:13:54.853 lat (usec): min=288, max=41996, avg=1265.43, stdev=5529.60 00:13:54.853 clat percentiles (usec): 00:13:54.853 | 1.00th=[ 310], 5.00th=[ 334], 10.00th=[ 367], 20.00th=[ 412], 00:13:54.853 | 30.00th=[ 437], 40.00th=[ 453], 50.00th=[ 469], 60.00th=[ 482], 00:13:54.853 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 627], 00:13:54.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:54.853 | 99.99th=[42206] 00:13:54.853 bw ( KiB/s): min= 96, max= 8176, per=23.59%, avg=3609.60, stdev=4075.93, samples=5 00:13:54.853 iops : min= 24, max= 2044, avg=902.40, stdev=1018.98, samples=5 00:13:54.853 lat (usec) : 500=69.93%, 750=27.66%, 1000=0.31% 00:13:54.853 lat (msec) : 2=0.04%, 10=0.09%, 20=0.04%, 50=1.89% 00:13:54.853 cpu : usr=0.82%, sys=2.16%, ctx=2281, majf=0, minf=1 00:13:54.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.853 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.853 00:13:54.853 Run status group 0 (all jobs): 00:13:54.853 READ: bw=14.9MiB/s (15.7MB/s), 2105KiB/s-8738KiB/s (2156kB/s-8948kB/s), io=55.8MiB (58.5MB), run=2921-3735msec 00:13:54.853 00:13:54.853 Disk stats (read/write): 00:13:54.853 nvme0n1: ios=7438/0, merge=0/0, ticks=4272/0, in_queue=4272, util=98.03% 00:13:54.853 nvme0n2: ios=2767/0, merge=0/0, ticks=3427/0, in_queue=3427, util=94.59% 00:13:54.853 nvme0n3: ios=1556/0, merge=0/0, ticks=3203/0, in_queue=3203, util=99.34% 
00:13:54.853 nvme0n4: ios=2278/0, merge=0/0, ticks=2709/0, in_queue=2709, util=96.75% 00:13:55.111 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.111 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:55.369 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.369 16:30:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:55.627 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.627 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:55.884 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:55.884 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:56.142 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:56.142 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1501585 00:13:56.142 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:56.142 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:56.399 
16:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:56.399 nvmf hotplug test: fio failed as expected 00:13:56.399 16:30:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.655 16:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:13:56.656 rmmod nvme_tcp 00:13:56.656 rmmod nvme_fabrics 00:13:56.656 rmmod nvme_keyring 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1499553 ']' 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1499553 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1499553 ']' 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1499553 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1499553 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1499553' 00:13:56.656 killing process with pid 1499553 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1499553 00:13:56.656 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1499553 00:13:56.913 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.913 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.913 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.913 16:30:36 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.913 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.913 16:30:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.913 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.914 16:30:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.448 16:30:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.448 00:13:59.448 real 0m23.435s 00:13:59.448 user 1m21.285s 00:13:59.448 sys 0m6.605s 00:13:59.448 16:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.448 16:30:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.448 ************************************ 00:13:59.448 END TEST nvmf_fio_target 00:13:59.448 ************************************ 00:13:59.448 16:30:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:59.448 16:30:38 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:59.448 16:30:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:59.448 16:30:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.448 16:30:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.448 ************************************ 00:13:59.448 START TEST nvmf_bdevio 00:13:59.448 ************************************ 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:59.448 * Looking for test storage... 
00:13:59.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.448 16:30:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.401 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:01.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:01.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:01.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:01.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:01.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:14:01.402 00:14:01.402 --- 10.0.0.2 ping statistics --- 00:14:01.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.402 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
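For reference, the network setup that `nvmf_tcp_init` performs in the trace above can be sketched as the command sequence below. This is an illustrative reconstruction from this log only (the `cvl_0_0`/`cvl_0_1` interface names and the `cvl_0_0_ns_spdk` namespace are specific to this run); the commands are printed rather than executed, since the real ones require root and the physical E810 ports.

```shell
# Illustrative reconstruction of the nvmf_tcp_init sequence traced above.
# Printed, not executed: the real commands need root and the cvl_0_* NICs.
nvmf_tcp_init_cmds() {
  cat <<'EOF'
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
EOF
}
nvmf_tcp_init_cmds
```

The target side (`cvl_0_0`, 10.0.0.2) lives inside the namespace and the initiator side (`cvl_0_1`, 10.0.0.1) stays in the root namespace, which is why the harness then pings each address from the opposite side before starting `nvmf_tgt`.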
00:14:01.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:14:01.402 00:14:01.402 --- 10.0.0.1 ping statistics --- 00:14:01.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.402 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1504297 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:01.402 16:30:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1504297 00:14:01.403 16:30:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1504297 ']' 00:14:01.403 16:30:40 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.403 16:30:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.403 16:30:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.403 16:30:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.403 16:30:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.403 [2024-07-15 16:30:40.799067] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:14:01.403 [2024-07-15 16:30:40.799145] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.403 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.403 [2024-07-15 16:30:40.867738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.403 [2024-07-15 16:30:40.976664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.403 [2024-07-15 16:30:40.976714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.403 [2024-07-15 16:30:40.976727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.403 [2024-07-15 16:30:40.976745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.403 [2024-07-15 16:30:40.976754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:01.403 [2024-07-15 16:30:40.976889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:01.403 [2024-07-15 16:30:40.976950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:01.403 [2024-07-15 16:30:40.976984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:01.403 [2024-07-15 16:30:40.976986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.670 [2024-07-15 16:30:41.125583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.670 Malloc0 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.670 [2024-07-15 16:30:41.176555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:14:01.670 { 00:14:01.670 "params": { 00:14:01.670 "name": "Nvme$subsystem", 00:14:01.670 "trtype": "$TEST_TRANSPORT", 00:14:01.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:01.670 "adrfam": "ipv4", 00:14:01.670 "trsvcid": "$NVMF_PORT", 00:14:01.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:01.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:01.670 "hdgst": ${hdgst:-false}, 00:14:01.670 "ddgst": ${ddgst:-false} 00:14:01.670 }, 00:14:01.670 "method": "bdev_nvme_attach_controller" 00:14:01.670 } 00:14:01.670 EOF 00:14:01.670 )") 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:01.670 16:30:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:01.670 "params": { 00:14:01.670 "name": "Nvme1", 00:14:01.670 "trtype": "tcp", 00:14:01.670 "traddr": "10.0.0.2", 00:14:01.670 "adrfam": "ipv4", 00:14:01.670 "trsvcid": "4420", 00:14:01.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:01.670 "hdgst": false, 00:14:01.670 "ddgst": false 00:14:01.670 }, 00:14:01.670 "method": "bdev_nvme_attach_controller" 00:14:01.670 }' 00:14:01.670 [2024-07-15 16:30:41.219973] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:14:01.670 [2024-07-15 16:30:41.220050] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504327 ] 00:14:01.670 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.931 [2024-07-15 16:30:41.280406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.931 [2024-07-15 16:30:41.394050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.931 [2024-07-15 16:30:41.394097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.931 [2024-07-15 16:30:41.394101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.191 I/O targets: 00:14:02.191 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:02.191 00:14:02.191 00:14:02.191 CUnit - A unit testing framework for C - Version 2.1-3 00:14:02.191 http://cunit.sourceforge.net/ 00:14:02.191 00:14:02.191 00:14:02.191 Suite: bdevio tests on: Nvme1n1 00:14:02.191 Test: blockdev write read block ...passed 00:14:02.191 Test: blockdev write zeroes read block ...passed 00:14:02.191 Test: blockdev write zeroes read no split ...passed 00:14:02.191 Test: blockdev write zeroes read split ...passed 00:14:02.451 Test: blockdev write zeroes read split partial ...passed 00:14:02.451 Test: blockdev reset ...[2024-07-15 16:30:41.813798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:02.451 [2024-07-15 16:30:41.813910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a1580 (9): Bad file descriptor 00:14:02.451 [2024-07-15 16:30:41.948929] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:02.451 passed 00:14:02.451 Test: blockdev write read 8 blocks ...passed 00:14:02.451 Test: blockdev write read size > 128k ...passed 00:14:02.451 Test: blockdev write read invalid size ...passed 00:14:02.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.451 Test: blockdev write read max offset ...passed 00:14:02.710 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:02.710 Test: blockdev writev readv 8 blocks ...passed 00:14:02.710 Test: blockdev writev readv 30 x 1block ...passed 00:14:02.710 Test: blockdev writev readv block ...passed 00:14:02.710 Test: blockdev writev readv size > 128k ...passed 00:14:02.710 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:02.710 Test: blockdev comparev and writev ...[2024-07-15 16:30:42.125458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.125494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.125517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.125535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.125939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.125963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.125985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.126010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.126392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.126418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.126440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.126457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.126830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.126855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.126887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:02.710 [2024-07-15 16:30:42.126907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:02.710 passed 00:14:02.710 Test: blockdev nvme passthru rw ...passed 00:14:02.710 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:30:42.210260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.710 [2024-07-15 16:30:42.210288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.210481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.710 [2024-07-15 16:30:42.210505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.210700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.710 [2024-07-15 16:30:42.210725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:02.710 [2024-07-15 16:30:42.210927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.710 [2024-07-15 16:30:42.210952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:02.710 passed 00:14:02.710 Test: blockdev nvme admin passthru ...passed 00:14:02.710 Test: blockdev copy ...passed 00:14:02.710 00:14:02.710 Run Summary: Type Total Ran Passed Failed Inactive 00:14:02.710 suites 1 1 n/a 0 0 00:14:02.710 tests 23 23 23 0 0 00:14:02.710 asserts 152 152 152 0 n/a 00:14:02.710 00:14:02.710 Elapsed time = 1.304 seconds 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
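The bdevio binary in this run was driven over `/dev/fd/62` by the JSON that `gen_nvmf_target_json` printed earlier in the trace. As a hedged reference, that config (values copied verbatim from this log; the `/tmp` path is only for illustration) can be written out and syntax-checked like so:

```shell
# The bdev_nvme_attach_controller config gen_nvmf_target_json emitted in this
# run, copied from the log and written to a scratch file for validation.
cat > /tmp/gen_nvmf_target.json <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
# Syntax-check with Python's stdlib json module (jq may not be installed).
python3 -m json.tool /tmp/gen_nvmf_target.json > /dev/null && echo 'config OK'
```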
00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.968 rmmod nvme_tcp 00:14:02.968 rmmod nvme_fabrics 00:14:02.968 rmmod nvme_keyring 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1504297 ']' 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1504297 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1504297 ']' 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1504297 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.968 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1504297 00:14:03.228 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:03.228 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:03.228 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1504297' 00:14:03.228 killing process with pid 1504297 00:14:03.228 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
1504297 00:14:03.228 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1504297 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.488 16:30:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.393 16:30:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.393 00:14:05.393 real 0m6.355s 00:14:05.393 user 0m10.473s 00:14:05.393 sys 0m1.997s 00:14:05.393 16:30:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:05.393 16:30:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 ************************************ 00:14:05.393 END TEST nvmf_bdevio 00:14:05.393 ************************************ 00:14:05.393 16:30:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:05.393 16:30:44 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:05.393 16:30:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:05.393 16:30:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.393 16:30:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 ************************************ 00:14:05.393 START TEST nvmf_auth_target 00:14:05.393 
************************************ 00:14:05.393 16:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:05.651 * Looking for test storage... 00:14:05.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.651 16:30:45 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.651 16:30:45 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.651 16:30:45 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:05.651 16:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.557 16:30:46 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:07.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.557 16:30:46 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:07.557 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:07.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:07.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:14:07.557 00:14:07.557 --- 10.0.0.2 ping statistics --- 00:14:07.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.557 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:14:07.557 00:14:07.557 --- 10.0.0.1 ping statistics --- 00:14:07.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.557 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.557 16:30:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1506438 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1506438 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1506438 ']' 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.557 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1506537 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=b4ded36d8a1343caa44ac5ed655a1b9176ff17dba1e3dffa 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V1s 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b4ded36d8a1343caa44ac5ed655a1b9176ff17dba1e3dffa 0 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b4ded36d8a1343caa44ac5ed655a1b9176ff17dba1e3dffa 0 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.815 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b4ded36d8a1343caa44ac5ed655a1b9176ff17dba1e3dffa 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V1s 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V1s 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.V1s 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:08.074 16:30:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3e7d2a691ace6236f8d6d24e6d42bcf99c08379426236b656141695d6a3ef130 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8uM 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3e7d2a691ace6236f8d6d24e6d42bcf99c08379426236b656141695d6a3ef130 3 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3e7d2a691ace6236f8d6d24e6d42bcf99c08379426236b656141695d6a3ef130 3 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3e7d2a691ace6236f8d6d24e6d42bcf99c08379426236b656141695d6a3ef130 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8uM 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8uM 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.8uM 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=059bbf449416cb6e63cfb1c326252f4f 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.R5F 00:14:08.074 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 059bbf449416cb6e63cfb1c326252f4f 1 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 059bbf449416cb6e63cfb1c326252f4f 1 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=059bbf449416cb6e63cfb1c326252f4f 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.R5F 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.R5F 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.R5F 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0c5329ca9521198715ce7689916dbe35ee94ee5aac2d8acd 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DDJ 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0c5329ca9521198715ce7689916dbe35ee94ee5aac2d8acd 2 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0c5329ca9521198715ce7689916dbe35ee94ee5aac2d8acd 2 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0c5329ca9521198715ce7689916dbe35ee94ee5aac2d8acd 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DDJ 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DDJ 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.DDJ 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=47f5f20b7675052fd407023eaeef1334a885676d73234a3e 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qwe 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 47f5f20b7675052fd407023eaeef1334a885676d73234a3e 2 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 47f5f20b7675052fd407023eaeef1334a885676d73234a3e 2 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=47f5f20b7675052fd407023eaeef1334a885676d73234a3e 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:08.075 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qwe 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qwe 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Qwe 00:14:08.333 16:30:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d041a591ca8a69f4bb4d17849598c415 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.z14 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d041a591ca8a69f4bb4d17849598c415 1 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d041a591ca8a69f4bb4d17849598c415 1 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d041a591ca8a69f4bb4d17849598c415 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.z14 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.z14 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.z14 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c3767739d21395065e27796d54e012ac395ca1ea63a620b6caa73438849a4d7c 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.U43 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c3767739d21395065e27796d54e012ac395ca1ea63a620b6caa73438849a4d7c 3 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c3767739d21395065e27796d54e012ac395ca1ea63a620b6caa73438849a4d7c 3 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c3767739d21395065e27796d54e012ac395ca1ea63a620b6caa73438849a4d7c 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.U43 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.U43 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.U43 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1506438 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1506438 ']' 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.333 16:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1506537 /var/tmp/host.sock 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1506537 ']' 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:08.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.591 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.V1s 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.V1s 00:14:08.849 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.V1s 00:14:09.106 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.8uM ]] 00:14:09.106 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8uM 00:14:09.106 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.106 16:30:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.106 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.106 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8uM 00:14:09.106 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8uM 00:14:09.365 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:09.365 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.R5F 00:14:09.365 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.365 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.365 16:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.365 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.R5F 00:14:09.365 16:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.R5F 00:14:09.623 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.DDJ ]] 00:14:09.623 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DDJ 00:14:09.623 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.623 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.623 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.623 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DDJ 00:14:09.623 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DDJ 00:14:09.881 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:09.881 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Qwe 00:14:09.881 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.881 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.881 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.881 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Qwe 00:14:09.881 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Qwe 00:14:10.139 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.z14 ]] 00:14:10.139 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z14 00:14:10.139 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.139 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.139 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.139 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z14 00:14:10.139 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.z14 00:14:10.397 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:10.397 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.U43 00:14:10.397 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.397 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.397 16:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.397 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.U43 00:14:10.397 16:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.U43 00:14:10.655 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:10.655 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:10.655 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.655 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.655 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:10.655 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.913 16:30:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.913 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.172 00:14:11.432 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.432 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.432 16:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.432 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.432 
16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.432 16:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.432 16:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.690 { 00:14:11.690 "cntlid": 1, 00:14:11.690 "qid": 0, 00:14:11.690 "state": "enabled", 00:14:11.690 "thread": "nvmf_tgt_poll_group_000", 00:14:11.690 "listen_address": { 00:14:11.690 "trtype": "TCP", 00:14:11.690 "adrfam": "IPv4", 00:14:11.690 "traddr": "10.0.0.2", 00:14:11.690 "trsvcid": "4420" 00:14:11.690 }, 00:14:11.690 "peer_address": { 00:14:11.690 "trtype": "TCP", 00:14:11.690 "adrfam": "IPv4", 00:14:11.690 "traddr": "10.0.0.1", 00:14:11.690 "trsvcid": "53968" 00:14:11.690 }, 00:14:11.690 "auth": { 00:14:11.690 "state": "completed", 00:14:11.690 "digest": "sha256", 00:14:11.690 "dhgroup": "null" 00:14:11.690 } 00:14:11.690 } 00:14:11.690 ]' 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.690 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.948 16:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:14:12.882 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.883 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.883 16:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.883 16:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.883 16:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.883 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.883 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:12.883 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.141 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.399 00:14:13.399 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.399 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.399 16:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:14:13.657 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.657 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.657 16:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.657 16:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.657 16:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.657 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.657 { 00:14:13.657 "cntlid": 3, 00:14:13.657 "qid": 0, 00:14:13.657 "state": "enabled", 00:14:13.657 "thread": "nvmf_tgt_poll_group_000", 00:14:13.657 "listen_address": { 00:14:13.657 "trtype": "TCP", 00:14:13.657 "adrfam": "IPv4", 00:14:13.657 "traddr": "10.0.0.2", 00:14:13.657 "trsvcid": "4420" 00:14:13.657 }, 00:14:13.657 "peer_address": { 00:14:13.657 "trtype": "TCP", 00:14:13.657 "adrfam": "IPv4", 00:14:13.657 "traddr": "10.0.0.1", 00:14:13.657 "trsvcid": "42906" 00:14:13.657 }, 00:14:13.657 "auth": { 00:14:13.657 "state": "completed", 00:14:13.657 "digest": "sha256", 00:14:13.657 "dhgroup": "null" 00:14:13.657 } 00:14:13.657 } 00:14:13.657 ]' 00:14:13.657 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.913 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.913 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.913 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:13.913 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.913 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.913 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:14:13.913 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.169 16:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.144 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.417 16:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.675 00:14:15.675 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.675 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.675 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.932 { 00:14:15.932 "cntlid": 5, 00:14:15.932 "qid": 0, 00:14:15.932 "state": "enabled", 00:14:15.932 "thread": "nvmf_tgt_poll_group_000", 00:14:15.932 "listen_address": { 00:14:15.932 "trtype": "TCP", 00:14:15.932 "adrfam": "IPv4", 00:14:15.932 "traddr": "10.0.0.2", 00:14:15.932 "trsvcid": "4420" 00:14:15.932 }, 00:14:15.932 "peer_address": { 00:14:15.932 "trtype": "TCP", 00:14:15.932 "adrfam": "IPv4", 00:14:15.932 "traddr": "10.0.0.1", 00:14:15.932 "trsvcid": "42932" 00:14:15.932 }, 00:14:15.932 "auth": { 00:14:15.932 "state": "completed", 00:14:15.932 "digest": "sha256", 00:14:15.932 "dhgroup": "null" 00:14:15.932 } 00:14:15.932 } 00:14:15.932 ]' 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.932 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.190 16:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:14:17.123 16:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.123 16:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.123 16:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.123 16:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.382 16:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.382 16:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.382 16:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.382 16:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:17.639 16:30:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.639 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.897 00:14:17.897 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.897 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.897 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:14:18.155 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.155 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.155 16:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.155 16:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.155 16:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.155 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.155 { 00:14:18.155 "cntlid": 7, 00:14:18.155 "qid": 0, 00:14:18.155 "state": "enabled", 00:14:18.155 "thread": "nvmf_tgt_poll_group_000", 00:14:18.155 "listen_address": { 00:14:18.155 "trtype": "TCP", 00:14:18.155 "adrfam": "IPv4", 00:14:18.155 "traddr": "10.0.0.2", 00:14:18.155 "trsvcid": "4420" 00:14:18.155 }, 00:14:18.155 "peer_address": { 00:14:18.155 "trtype": "TCP", 00:14:18.155 "adrfam": "IPv4", 00:14:18.155 "traddr": "10.0.0.1", 00:14:18.155 "trsvcid": "42958" 00:14:18.155 }, 00:14:18.155 "auth": { 00:14:18.155 "state": "completed", 00:14:18.155 "digest": "sha256", 00:14:18.155 "dhgroup": "null" 00:14:18.155 } 00:14:18.155 } 00:14:18.155 ]' 00:14:18.155 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.156 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.156 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.156 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:18.156 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.156 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.156 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:18.156 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.414 16:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:19.790 16:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.790 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.048 00:14:20.048 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.048 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.048 16:30:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.306 { 00:14:20.306 "cntlid": 9, 00:14:20.306 "qid": 0, 00:14:20.306 "state": "enabled", 00:14:20.306 "thread": "nvmf_tgt_poll_group_000", 00:14:20.306 "listen_address": { 00:14:20.306 "trtype": "TCP", 00:14:20.306 "adrfam": "IPv4", 00:14:20.306 "traddr": "10.0.0.2", 00:14:20.306 "trsvcid": "4420" 00:14:20.306 }, 00:14:20.306 "peer_address": { 00:14:20.306 "trtype": "TCP", 00:14:20.306 "adrfam": "IPv4", 00:14:20.306 "traddr": "10.0.0.1", 00:14:20.306 "trsvcid": "42966" 00:14:20.306 }, 00:14:20.306 "auth": { 00:14:20.306 "state": "completed", 00:14:20.306 "digest": "sha256", 00:14:20.306 "dhgroup": "ffdhe2048" 00:14:20.306 } 00:14:20.306 } 00:14:20.306 ]' 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.306 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.564 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:20.564 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.564 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.564 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.564 16:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.822 16:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:14:21.756 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.757 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.757 16:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.757 16:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.757 16:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.757 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.757 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.757 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.015 16:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.016 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.016 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.273 00:14:22.273 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.273 16:31:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.273 16:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.532 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.532 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.532 16:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.532 16:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.791 { 00:14:22.791 "cntlid": 11, 00:14:22.791 "qid": 0, 00:14:22.791 "state": "enabled", 00:14:22.791 "thread": "nvmf_tgt_poll_group_000", 00:14:22.791 "listen_address": { 00:14:22.791 "trtype": "TCP", 00:14:22.791 "adrfam": "IPv4", 00:14:22.791 "traddr": "10.0.0.2", 00:14:22.791 "trsvcid": "4420" 00:14:22.791 }, 00:14:22.791 "peer_address": { 00:14:22.791 "trtype": "TCP", 00:14:22.791 "adrfam": "IPv4", 00:14:22.791 "traddr": "10.0.0.1", 00:14:22.791 "trsvcid": "37868" 00:14:22.791 }, 00:14:22.791 "auth": { 00:14:22.791 "state": "completed", 00:14:22.791 "digest": "sha256", 00:14:22.791 "dhgroup": "ffdhe2048" 00:14:22.791 } 00:14:22.791 } 00:14:22.791 ]' 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.791 16:31:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.791 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.049 16:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.985 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.242 16:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.500 
00:14:24.500 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.500 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.500 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.757 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.757 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.757 16:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.757 16:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.757 16:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.757 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.757 { 00:14:24.757 "cntlid": 13, 00:14:24.757 "qid": 0, 00:14:24.757 "state": "enabled", 00:14:24.757 "thread": "nvmf_tgt_poll_group_000", 00:14:24.758 "listen_address": { 00:14:24.758 "trtype": "TCP", 00:14:24.758 "adrfam": "IPv4", 00:14:24.758 "traddr": "10.0.0.2", 00:14:24.758 "trsvcid": "4420" 00:14:24.758 }, 00:14:24.758 "peer_address": { 00:14:24.758 "trtype": "TCP", 00:14:24.758 "adrfam": "IPv4", 00:14:24.758 "traddr": "10.0.0.1", 00:14:24.758 "trsvcid": "37902" 00:14:24.758 }, 00:14:24.758 "auth": { 00:14:24.758 "state": "completed", 00:14:24.758 "digest": "sha256", 00:14:24.758 "dhgroup": "ffdhe2048" 00:14:24.758 } 00:14:24.758 } 00:14:24.758 ]' 00:14:24.758 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.016 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.016 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.016 16:31:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.016 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.016 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.016 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.016 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.272 16:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:14:26.211 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.469 16:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.727 
00:14:26.727 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.727 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.727 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.985 { 00:14:26.985 "cntlid": 15, 00:14:26.985 "qid": 0, 00:14:26.985 "state": "enabled", 00:14:26.985 "thread": "nvmf_tgt_poll_group_000", 00:14:26.985 "listen_address": { 00:14:26.985 "trtype": "TCP", 00:14:26.985 "adrfam": "IPv4", 00:14:26.985 "traddr": "10.0.0.2", 00:14:26.985 "trsvcid": "4420" 00:14:26.985 }, 00:14:26.985 "peer_address": { 00:14:26.985 "trtype": "TCP", 00:14:26.985 "adrfam": "IPv4", 00:14:26.985 "traddr": "10.0.0.1", 00:14:26.985 "trsvcid": "37924" 00:14:26.985 }, 00:14:26.985 "auth": { 00:14:26.985 "state": "completed", 00:14:26.985 "digest": "sha256", 00:14:26.985 "dhgroup": "ffdhe2048" 00:14:26.985 } 00:14:26.985 } 00:14:26.985 ]' 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.985 16:31:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.985 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.245 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.245 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.245 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.505 16:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.478 16:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.737 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.996 00:14:28.996 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.996 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.996 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.255 { 00:14:29.255 "cntlid": 17, 00:14:29.255 "qid": 0, 00:14:29.255 "state": "enabled", 00:14:29.255 "thread": "nvmf_tgt_poll_group_000", 00:14:29.255 "listen_address": { 00:14:29.255 "trtype": "TCP", 00:14:29.255 "adrfam": "IPv4", 00:14:29.255 "traddr": "10.0.0.2", 00:14:29.255 "trsvcid": "4420" 00:14:29.255 }, 00:14:29.255 "peer_address": { 00:14:29.255 "trtype": "TCP", 00:14:29.255 "adrfam": "IPv4", 00:14:29.255 "traddr": "10.0.0.1", 00:14:29.255 "trsvcid": "37968" 00:14:29.255 }, 00:14:29.255 "auth": { 00:14:29.255 "state": "completed", 00:14:29.255 "digest": "sha256", 00:14:29.255 "dhgroup": "ffdhe3072" 00:14:29.255 } 00:14:29.255 } 00:14:29.255 ]' 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.255 16:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.512 16:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:14:30.450 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.711 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.711 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.711 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.711 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.711 16:31:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.711 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.711 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.970 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.228 00:14:31.228 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.228 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.228 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.486 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.486 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.486 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.486 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.486 16:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.486 16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.486 { 00:14:31.486 "cntlid": 19, 00:14:31.486 "qid": 0, 00:14:31.486 "state": "enabled", 00:14:31.486 "thread": "nvmf_tgt_poll_group_000", 00:14:31.486 "listen_address": { 00:14:31.486 "trtype": "TCP", 00:14:31.486 "adrfam": "IPv4", 00:14:31.486 "traddr": "10.0.0.2", 00:14:31.486 "trsvcid": "4420" 00:14:31.486 }, 00:14:31.486 "peer_address": { 00:14:31.486 "trtype": "TCP", 00:14:31.486 "adrfam": "IPv4", 00:14:31.486 "traddr": "10.0.0.1", 00:14:31.486 "trsvcid": "38000" 00:14:31.486 }, 00:14:31.486 "auth": { 00:14:31.486 "state": "completed", 00:14:31.486 "digest": "sha256", 00:14:31.486 "dhgroup": "ffdhe3072" 00:14:31.486 } 00:14:31.486 } 00:14:31.486 ]' 00:14:31.486 
16:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.486 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.486 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.486 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.486 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.744 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.744 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.744 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.001 16:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:14:32.938 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.938 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:32.938 16:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.938 16:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.938 16:31:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.938 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.938 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.938 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:33.196 16:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.455 00:14:33.455 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.455 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.455 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.713 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.713 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.713 16:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.713 16:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.713 16:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.713 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.713 { 00:14:33.713 "cntlid": 21, 00:14:33.713 "qid": 0, 00:14:33.713 "state": "enabled", 00:14:33.713 "thread": "nvmf_tgt_poll_group_000", 00:14:33.713 "listen_address": { 00:14:33.713 "trtype": "TCP", 00:14:33.713 "adrfam": "IPv4", 00:14:33.713 "traddr": "10.0.0.2", 00:14:33.713 "trsvcid": "4420" 00:14:33.713 }, 00:14:33.713 "peer_address": { 00:14:33.713 "trtype": "TCP", 00:14:33.713 "adrfam": "IPv4", 00:14:33.713 "traddr": "10.0.0.1", 00:14:33.713 "trsvcid": "37132" 00:14:33.713 }, 00:14:33.713 "auth": { 00:14:33.713 "state": "completed", 00:14:33.713 "digest": 
"sha256", 00:14:33.713 "dhgroup": "ffdhe3072" 00:14:33.713 } 00:14:33.713 } 00:14:33.713 ]' 00:14:33.713 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.971 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.971 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.971 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.971 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.971 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.971 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.971 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.229 16:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:14:35.164 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.164 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.164 16:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.164 16:31:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.164 16:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.164 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.164 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:35.164 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:35.422 16:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:35.681 00:14:35.681 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.681 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.681 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.939 { 00:14:35.939 "cntlid": 23, 00:14:35.939 "qid": 0, 00:14:35.939 "state": "enabled", 00:14:35.939 "thread": "nvmf_tgt_poll_group_000", 00:14:35.939 "listen_address": { 00:14:35.939 "trtype": "TCP", 00:14:35.939 "adrfam": "IPv4", 00:14:35.939 "traddr": "10.0.0.2", 00:14:35.939 "trsvcid": "4420" 00:14:35.939 }, 00:14:35.939 "peer_address": { 00:14:35.939 "trtype": "TCP", 00:14:35.939 "adrfam": "IPv4", 00:14:35.939 "traddr": "10.0.0.1", 00:14:35.939 "trsvcid": "37160" 00:14:35.939 }, 00:14:35.939 "auth": 
{ 00:14:35.939 "state": "completed", 00:14:35.939 "digest": "sha256", 00:14:35.939 "dhgroup": "ffdhe3072" 00:14:35.939 } 00:14:35.939 } 00:14:35.939 ]' 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.939 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.197 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:36.197 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.197 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.197 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.197 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.456 16:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.393 16:31:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.393 16:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.651 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.219 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.219 { 00:14:38.219 "cntlid": 25, 00:14:38.219 "qid": 0, 00:14:38.219 "state": "enabled", 00:14:38.219 "thread": "nvmf_tgt_poll_group_000", 00:14:38.219 "listen_address": { 00:14:38.219 "trtype": "TCP", 00:14:38.219 "adrfam": "IPv4", 00:14:38.219 "traddr": "10.0.0.2", 00:14:38.219 "trsvcid": "4420" 00:14:38.219 }, 00:14:38.219 "peer_address": { 00:14:38.219 "trtype": "TCP", 
00:14:38.219 "adrfam": "IPv4", 00:14:38.219 "traddr": "10.0.0.1", 00:14:38.219 "trsvcid": "37186" 00:14:38.219 }, 00:14:38.219 "auth": { 00:14:38.219 "state": "completed", 00:14:38.219 "digest": "sha256", 00:14:38.219 "dhgroup": "ffdhe4096" 00:14:38.219 } 00:14:38.219 } 00:14:38.219 ]' 00:14:38.219 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.476 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.476 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.476 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.476 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.476 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.476 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.476 16:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.734 16:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.668 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.945 16:31:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.945 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.510 00:14:40.510 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.510 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.510 16:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.510 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.510 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.510 16:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.510 16:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.766 { 00:14:40.766 "cntlid": 27, 00:14:40.766 "qid": 0, 00:14:40.766 "state": "enabled", 00:14:40.766 "thread": "nvmf_tgt_poll_group_000", 00:14:40.766 "listen_address": { 00:14:40.766 "trtype": "TCP", 00:14:40.766 "adrfam": 
"IPv4", 00:14:40.766 "traddr": "10.0.0.2", 00:14:40.766 "trsvcid": "4420" 00:14:40.766 }, 00:14:40.766 "peer_address": { 00:14:40.766 "trtype": "TCP", 00:14:40.766 "adrfam": "IPv4", 00:14:40.766 "traddr": "10.0.0.1", 00:14:40.766 "trsvcid": "37214" 00:14:40.766 }, 00:14:40.766 "auth": { 00:14:40.766 "state": "completed", 00:14:40.766 "digest": "sha256", 00:14:40.766 "dhgroup": "ffdhe4096" 00:14:40.766 } 00:14:40.766 } 00:14:40.766 ]' 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.766 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.023 16:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.957 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.247 16:31:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.247 16:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.814 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.814 { 00:14:42.814 "cntlid": 29, 00:14:42.814 "qid": 0, 00:14:42.814 "state": "enabled", 00:14:42.814 "thread": 
"nvmf_tgt_poll_group_000", 00:14:42.814 "listen_address": { 00:14:42.814 "trtype": "TCP", 00:14:42.814 "adrfam": "IPv4", 00:14:42.814 "traddr": "10.0.0.2", 00:14:42.814 "trsvcid": "4420" 00:14:42.814 }, 00:14:42.814 "peer_address": { 00:14:42.814 "trtype": "TCP", 00:14:42.814 "adrfam": "IPv4", 00:14:42.814 "traddr": "10.0.0.1", 00:14:42.814 "trsvcid": "51586" 00:14:42.814 }, 00:14:42.814 "auth": { 00:14:42.814 "state": "completed", 00:14:42.814 "digest": "sha256", 00:14:42.814 "dhgroup": "ffdhe4096" 00:14:42.814 } 00:14:42.814 } 00:14:42.814 ]' 00:14:42.814 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.072 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.072 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.072 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:43.072 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.072 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.072 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.072 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.330 16:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.266 16:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.523 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.091 00:14:45.091 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.091 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.091 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.091 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.091 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.091 16:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.091 16:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.349 { 00:14:45.349 "cntlid": 31, 00:14:45.349 "qid": 0, 00:14:45.349 "state": "enabled", 00:14:45.349 "thread": 
"nvmf_tgt_poll_group_000", 00:14:45.349 "listen_address": { 00:14:45.349 "trtype": "TCP", 00:14:45.349 "adrfam": "IPv4", 00:14:45.349 "traddr": "10.0.0.2", 00:14:45.349 "trsvcid": "4420" 00:14:45.349 }, 00:14:45.349 "peer_address": { 00:14:45.349 "trtype": "TCP", 00:14:45.349 "adrfam": "IPv4", 00:14:45.349 "traddr": "10.0.0.1", 00:14:45.349 "trsvcid": "51620" 00:14:45.349 }, 00:14:45.349 "auth": { 00:14:45.349 "state": "completed", 00:14:45.349 "digest": "sha256", 00:14:45.349 "dhgroup": "ffdhe4096" 00:14:45.349 } 00:14:45.349 } 00:14:45.349 ]' 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.349 16:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.607 16:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:14:46.542 16:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.542 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.542 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.820 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.388 00:14:47.388 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.388 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.388 16:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:14:47.646 { 00:14:47.646 "cntlid": 33, 00:14:47.646 "qid": 0, 00:14:47.646 "state": "enabled", 00:14:47.646 "thread": "nvmf_tgt_poll_group_000", 00:14:47.646 "listen_address": { 00:14:47.646 "trtype": "TCP", 00:14:47.646 "adrfam": "IPv4", 00:14:47.646 "traddr": "10.0.0.2", 00:14:47.646 "trsvcid": "4420" 00:14:47.646 }, 00:14:47.646 "peer_address": { 00:14:47.646 "trtype": "TCP", 00:14:47.646 "adrfam": "IPv4", 00:14:47.646 "traddr": "10.0.0.1", 00:14:47.646 "trsvcid": "51654" 00:14:47.646 }, 00:14:47.646 "auth": { 00:14:47.646 "state": "completed", 00:14:47.646 "digest": "sha256", 00:14:47.646 "dhgroup": "ffdhe6144" 00:14:47.646 } 00:14:47.646 } 00:14:47.646 ]' 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.646 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.903 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.903 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.903 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.161 16:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret 
DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.096 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.353 16:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.922 00:14:49.922 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.922 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.922 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.182 16:31:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.182 { 00:14:50.182 "cntlid": 35, 00:14:50.182 "qid": 0, 00:14:50.182 "state": "enabled", 00:14:50.182 "thread": "nvmf_tgt_poll_group_000", 00:14:50.182 "listen_address": { 00:14:50.182 "trtype": "TCP", 00:14:50.182 "adrfam": "IPv4", 00:14:50.182 "traddr": "10.0.0.2", 00:14:50.182 "trsvcid": "4420" 00:14:50.182 }, 00:14:50.182 "peer_address": { 00:14:50.182 "trtype": "TCP", 00:14:50.182 "adrfam": "IPv4", 00:14:50.182 "traddr": "10.0.0.1", 00:14:50.182 "trsvcid": "51670" 00:14:50.182 }, 00:14:50.182 "auth": { 00:14:50.182 "state": "completed", 00:14:50.182 "digest": "sha256", 00:14:50.182 "dhgroup": "ffdhe6144" 00:14:50.182 } 00:14:50.182 } 00:14:50.182 ]' 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.182 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.441 16:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.376 16:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.633 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.199 00:14:52.199 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.199 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.199 16:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.457 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.457 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.457 16:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.457 16:31:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.457 16:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.457 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.457 { 00:14:52.457 "cntlid": 37, 00:14:52.457 "qid": 0, 00:14:52.457 "state": "enabled", 00:14:52.457 "thread": "nvmf_tgt_poll_group_000", 00:14:52.457 "listen_address": { 00:14:52.457 "trtype": "TCP", 00:14:52.457 "adrfam": "IPv4", 00:14:52.457 "traddr": "10.0.0.2", 00:14:52.457 "trsvcid": "4420" 00:14:52.457 }, 00:14:52.457 "peer_address": { 00:14:52.457 "trtype": "TCP", 00:14:52.457 "adrfam": "IPv4", 00:14:52.457 "traddr": "10.0.0.1", 00:14:52.457 "trsvcid": "44846" 00:14:52.457 }, 00:14:52.457 "auth": { 00:14:52.457 "state": "completed", 00:14:52.457 "digest": "sha256", 00:14:52.457 "dhgroup": "ffdhe6144" 00:14:52.457 } 00:14:52.457 } 00:14:52.457 ]' 00:14:52.457 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.716 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.716 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.716 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.716 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.716 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.716 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.716 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.974 16:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.908 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:54.166 16:31:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:54.166 16:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:54.732 00:14:54.732 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.732 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.732 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.992 16:31:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.992 { 00:14:54.992 "cntlid": 39, 00:14:54.992 "qid": 0, 00:14:54.992 "state": "enabled", 00:14:54.992 "thread": "nvmf_tgt_poll_group_000", 00:14:54.992 "listen_address": { 00:14:54.992 "trtype": "TCP", 00:14:54.992 "adrfam": "IPv4", 00:14:54.992 "traddr": "10.0.0.2", 00:14:54.992 "trsvcid": "4420" 00:14:54.992 }, 00:14:54.992 "peer_address": { 00:14:54.992 "trtype": "TCP", 00:14:54.992 "adrfam": "IPv4", 00:14:54.992 "traddr": "10.0.0.1", 00:14:54.992 "trsvcid": "44872" 00:14:54.992 }, 00:14:54.992 "auth": { 00:14:54.992 "state": "completed", 00:14:54.992 "digest": "sha256", 00:14:54.992 "dhgroup": "ffdhe6144" 00:14:54.992 } 00:14:54.992 } 00:14:54.992 ]' 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.992 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.252 16:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.226 16:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.509 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.446 00:14:57.446 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.446 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.446 16:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.703 { 00:14:57.703 "cntlid": 41, 00:14:57.703 "qid": 0, 00:14:57.703 "state": "enabled", 00:14:57.703 "thread": "nvmf_tgt_poll_group_000", 00:14:57.703 "listen_address": { 00:14:57.703 "trtype": "TCP", 00:14:57.703 "adrfam": "IPv4", 00:14:57.703 "traddr": "10.0.0.2", 00:14:57.703 "trsvcid": "4420" 00:14:57.703 }, 00:14:57.703 "peer_address": { 00:14:57.703 "trtype": "TCP", 00:14:57.703 "adrfam": "IPv4", 00:14:57.703 "traddr": "10.0.0.1", 00:14:57.703 "trsvcid": "44908" 00:14:57.703 }, 00:14:57.703 "auth": { 00:14:57.703 "state": "completed", 00:14:57.703 "digest": "sha256", 00:14:57.703 "dhgroup": "ffdhe8192" 00:14:57.703 } 00:14:57.703 } 00:14:57.703 ]' 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.703 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.960 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.960 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.960 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.960 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.960 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:58.217 16:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.150 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.408 16:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.344 00:15:00.344 16:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.344 16:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.344 16:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.601 { 00:15:00.601 "cntlid": 43, 00:15:00.601 "qid": 0, 00:15:00.601 "state": "enabled", 00:15:00.601 "thread": "nvmf_tgt_poll_group_000", 00:15:00.601 "listen_address": { 00:15:00.601 "trtype": "TCP", 00:15:00.601 "adrfam": "IPv4", 00:15:00.601 "traddr": "10.0.0.2", 00:15:00.601 "trsvcid": "4420" 00:15:00.601 }, 00:15:00.601 "peer_address": { 00:15:00.601 "trtype": "TCP", 00:15:00.601 "adrfam": "IPv4", 00:15:00.601 "traddr": "10.0.0.1", 00:15:00.601 "trsvcid": "44940" 00:15:00.601 }, 00:15:00.601 "auth": { 00:15:00.601 "state": "completed", 00:15:00.601 "digest": "sha256", 00:15:00.601 "dhgroup": "ffdhe8192" 00:15:00.601 } 00:15:00.601 } 00:15:00.601 ]' 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.601 16:31:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.860 16:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.794 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.052 16:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.989 00:15:02.989 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.989 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.989 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.247 { 00:15:03.247 "cntlid": 45, 00:15:03.247 "qid": 0, 00:15:03.247 "state": "enabled", 00:15:03.247 "thread": "nvmf_tgt_poll_group_000", 00:15:03.247 "listen_address": { 00:15:03.247 "trtype": "TCP", 00:15:03.247 "adrfam": "IPv4", 00:15:03.247 "traddr": "10.0.0.2", 00:15:03.247 "trsvcid": "4420" 00:15:03.247 }, 00:15:03.247 "peer_address": { 00:15:03.247 "trtype": "TCP", 00:15:03.247 "adrfam": "IPv4", 00:15:03.247 "traddr": "10.0.0.1", 00:15:03.247 "trsvcid": "38900" 00:15:03.247 }, 00:15:03.247 "auth": { 00:15:03.247 "state": "completed", 00:15:03.247 "digest": "sha256", 00:15:03.247 "dhgroup": "ffdhe8192" 00:15:03.247 } 00:15:03.247 } 00:15:03.247 ]' 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.247 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.528 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.528 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.528 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.528 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:03.528 16:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.787 16:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.725 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:04.983 16:31:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.983 16:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.920 00:15:05.920 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.920 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.920 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.178 { 00:15:06.178 "cntlid": 47, 00:15:06.178 "qid": 0, 00:15:06.178 "state": "enabled", 00:15:06.178 "thread": "nvmf_tgt_poll_group_000", 00:15:06.178 "listen_address": { 00:15:06.178 "trtype": "TCP", 00:15:06.178 "adrfam": "IPv4", 00:15:06.178 "traddr": "10.0.0.2", 00:15:06.178 "trsvcid": "4420" 00:15:06.178 }, 00:15:06.178 "peer_address": { 00:15:06.178 "trtype": "TCP", 00:15:06.178 "adrfam": "IPv4", 00:15:06.178 "traddr": "10.0.0.1", 00:15:06.178 "trsvcid": "38940" 00:15:06.178 }, 00:15:06.178 "auth": { 00:15:06.178 "state": "completed", 00:15:06.178 "digest": "sha256", 00:15:06.178 "dhgroup": "ffdhe8192" 00:15:06.178 } 00:15:06.178 } 00:15:06.178 ]' 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.178 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.437 16:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.374 16:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.633 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.891 00:15:08.149 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.149 16:31:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.149 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.149 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.149 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.149 16:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.149 16:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.406 16:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.406 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.406 { 00:15:08.406 "cntlid": 49, 00:15:08.406 "qid": 0, 00:15:08.406 "state": "enabled", 00:15:08.406 "thread": "nvmf_tgt_poll_group_000", 00:15:08.406 "listen_address": { 00:15:08.406 "trtype": "TCP", 00:15:08.406 "adrfam": "IPv4", 00:15:08.406 "traddr": "10.0.0.2", 00:15:08.406 "trsvcid": "4420" 00:15:08.406 }, 00:15:08.406 "peer_address": { 00:15:08.406 "trtype": "TCP", 00:15:08.406 "adrfam": "IPv4", 00:15:08.406 "traddr": "10.0.0.1", 00:15:08.406 "trsvcid": "38976" 00:15:08.406 }, 00:15:08.406 "auth": { 00:15:08.407 "state": "completed", 00:15:08.407 "digest": "sha384", 00:15:08.407 "dhgroup": "null" 00:15:08.407 } 00:15:08.407 } 00:15:08.407 ]' 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.407 16:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.664 16:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.597 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.856 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.115 00:15:10.115 
16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.115 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.115 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.408 { 00:15:10.408 "cntlid": 51, 00:15:10.408 "qid": 0, 00:15:10.408 "state": "enabled", 00:15:10.408 "thread": "nvmf_tgt_poll_group_000", 00:15:10.408 "listen_address": { 00:15:10.408 "trtype": "TCP", 00:15:10.408 "adrfam": "IPv4", 00:15:10.408 "traddr": "10.0.0.2", 00:15:10.408 "trsvcid": "4420" 00:15:10.408 }, 00:15:10.408 "peer_address": { 00:15:10.408 "trtype": "TCP", 00:15:10.408 "adrfam": "IPv4", 00:15:10.408 "traddr": "10.0.0.1", 00:15:10.408 "trsvcid": "38998" 00:15:10.408 }, 00:15:10.408 "auth": { 00:15:10.408 "state": "completed", 00:15:10.408 "digest": "sha384", 00:15:10.408 "dhgroup": "null" 00:15:10.408 } 00:15:10.408 } 00:15:10.408 ]' 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.408 16:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.667 16:31:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:10.667 16:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.667 16:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.667 16:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.667 16:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.925 16:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:15:11.862 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.862 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.862 16:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.862 16:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.862 16:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.862 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.862 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.862 16:31:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.120 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:12.379 00:15:12.379 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.379 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.379 16:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.636 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.636 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.636 16:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.636 16:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.637 16:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.637 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.637 { 00:15:12.637 "cntlid": 53, 00:15:12.637 "qid": 0, 00:15:12.637 "state": "enabled", 00:15:12.637 "thread": "nvmf_tgt_poll_group_000", 00:15:12.637 "listen_address": { 00:15:12.637 "trtype": "TCP", 00:15:12.637 "adrfam": "IPv4", 00:15:12.637 "traddr": "10.0.0.2", 00:15:12.637 "trsvcid": "4420" 00:15:12.637 }, 00:15:12.637 "peer_address": { 00:15:12.637 "trtype": "TCP", 00:15:12.637 "adrfam": "IPv4", 00:15:12.637 "traddr": "10.0.0.1", 00:15:12.637 "trsvcid": "57204" 00:15:12.637 }, 00:15:12.637 "auth": { 00:15:12.637 "state": "completed", 00:15:12.637 "digest": "sha384", 00:15:12.637 "dhgroup": "null" 00:15:12.637 } 00:15:12.637 } 00:15:12.637 ]' 00:15:12.637 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.637 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.637 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:15:12.637 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.637 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.895 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.895 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.895 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.154 16:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:15:14.097 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.355 16:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:15:14.613 00:15:14.613 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.613 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.613 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.871 { 00:15:14.871 "cntlid": 55, 00:15:14.871 "qid": 0, 00:15:14.871 "state": "enabled", 00:15:14.871 "thread": "nvmf_tgt_poll_group_000", 00:15:14.871 "listen_address": { 00:15:14.871 "trtype": "TCP", 00:15:14.871 "adrfam": "IPv4", 00:15:14.871 "traddr": "10.0.0.2", 00:15:14.871 "trsvcid": "4420" 00:15:14.871 }, 00:15:14.871 "peer_address": { 00:15:14.871 "trtype": "TCP", 00:15:14.871 "adrfam": "IPv4", 00:15:14.871 "traddr": "10.0.0.1", 00:15:14.871 "trsvcid": "57244" 00:15:14.871 }, 00:15:14.871 "auth": { 00:15:14.871 "state": "completed", 00:15:14.871 "digest": "sha384", 00:15:14.871 "dhgroup": "null" 00:15:14.871 } 00:15:14.871 } 00:15:14.871 ]' 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.871 
16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.871 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.130 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.130 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.130 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.388 16:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.322 16:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.581 16:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.581 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.581 16:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.839 00:15:16.839 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.839 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.839 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.098 { 00:15:17.098 "cntlid": 57, 00:15:17.098 "qid": 0, 00:15:17.098 "state": "enabled", 00:15:17.098 "thread": "nvmf_tgt_poll_group_000", 00:15:17.098 "listen_address": { 00:15:17.098 "trtype": "TCP", 00:15:17.098 "adrfam": "IPv4", 00:15:17.098 "traddr": "10.0.0.2", 00:15:17.098 "trsvcid": "4420" 00:15:17.098 }, 00:15:17.098 "peer_address": { 00:15:17.098 "trtype": "TCP", 00:15:17.098 "adrfam": "IPv4", 00:15:17.098 "traddr": "10.0.0.1", 00:15:17.098 "trsvcid": "57274" 00:15:17.098 }, 00:15:17.098 "auth": { 00:15:17.098 "state": "completed", 00:15:17.098 "digest": "sha384", 00:15:17.098 "dhgroup": "ffdhe2048" 00:15:17.098 } 00:15:17.098 } 00:15:17.098 ]' 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.098 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.356 16:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.290 16:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.547 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.804 00:15:18.805 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.805 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.805 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.063 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.063 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.063 16:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.063 16:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.063 16:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.063 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.063 { 00:15:19.063 "cntlid": 59, 00:15:19.063 "qid": 0, 00:15:19.063 "state": "enabled", 00:15:19.063 "thread": "nvmf_tgt_poll_group_000", 00:15:19.063 "listen_address": { 00:15:19.063 "trtype": "TCP", 00:15:19.063 "adrfam": "IPv4", 00:15:19.063 "traddr": "10.0.0.2", 00:15:19.063 "trsvcid": "4420" 00:15:19.063 }, 00:15:19.063 "peer_address": { 00:15:19.063 "trtype": "TCP", 00:15:19.063 "adrfam": "IPv4", 00:15:19.063 "traddr": "10.0.0.1", 00:15:19.063 "trsvcid": "57298" 00:15:19.063 }, 00:15:19.063 "auth": { 00:15:19.063 "state": "completed", 00:15:19.063 "digest": "sha384", 00:15:19.063 "dhgroup": "ffdhe2048" 00:15:19.063 } 00:15:19.063 } 00:15:19.063 ]' 00:15:19.063 
16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.320 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.320 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.320 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.320 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.320 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.320 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.320 16:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.578 16:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:15:20.514 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.514 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.514 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.514 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.514 16:32:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.514 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.514 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.514 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:20.771 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.336 00:15:21.336 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.336 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.336 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.336 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.593 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.593 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.593 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.593 16:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.593 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.593 { 00:15:21.593 "cntlid": 61, 00:15:21.593 "qid": 0, 00:15:21.593 "state": "enabled", 00:15:21.593 "thread": "nvmf_tgt_poll_group_000", 00:15:21.593 "listen_address": { 00:15:21.593 "trtype": "TCP", 00:15:21.593 "adrfam": "IPv4", 00:15:21.593 "traddr": "10.0.0.2", 00:15:21.593 "trsvcid": "4420" 00:15:21.593 }, 00:15:21.593 "peer_address": { 00:15:21.593 "trtype": "TCP", 00:15:21.593 "adrfam": "IPv4", 00:15:21.593 "traddr": "10.0.0.1", 00:15:21.594 "trsvcid": "57324" 00:15:21.594 }, 00:15:21.594 "auth": { 00:15:21.594 "state": "completed", 00:15:21.594 "digest": 
"sha384", 00:15:21.594 "dhgroup": "ffdhe2048" 00:15:21.594 } 00:15:21.594 } 00:15:21.594 ]' 00:15:21.594 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.594 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.594 16:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.594 16:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.594 16:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.594 16:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.594 16:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.594 16:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.850 16:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:15:22.782 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.782 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.782 16:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.782 16:32:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.782 16:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.782 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.782 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.782 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.039 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.603 00:15:23.603 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.603 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.603 16:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.859 { 00:15:23.859 "cntlid": 63, 00:15:23.859 "qid": 0, 00:15:23.859 "state": "enabled", 00:15:23.859 "thread": "nvmf_tgt_poll_group_000", 00:15:23.859 "listen_address": { 00:15:23.859 "trtype": "TCP", 00:15:23.859 "adrfam": "IPv4", 00:15:23.859 "traddr": "10.0.0.2", 00:15:23.859 "trsvcid": "4420" 00:15:23.859 }, 00:15:23.859 "peer_address": { 00:15:23.859 "trtype": "TCP", 00:15:23.859 "adrfam": "IPv4", 00:15:23.859 "traddr": "10.0.0.1", 00:15:23.859 "trsvcid": "36474" 00:15:23.859 }, 00:15:23.859 "auth": 
{ 00:15:23.859 "state": "completed", 00:15:23.859 "digest": "sha384", 00:15:23.859 "dhgroup": "ffdhe2048" 00:15:23.859 } 00:15:23.859 } 00:15:23.859 ]' 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.859 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.137 16:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.072 16:32:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.072 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.329 16:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.896 00:15:25.896 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.896 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.896 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.896 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.896 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.896 16:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.896 16:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.154 { 00:15:26.154 "cntlid": 65, 00:15:26.154 "qid": 0, 00:15:26.154 "state": "enabled", 00:15:26.154 "thread": "nvmf_tgt_poll_group_000", 00:15:26.154 "listen_address": { 00:15:26.154 "trtype": "TCP", 00:15:26.154 "adrfam": "IPv4", 00:15:26.154 "traddr": "10.0.0.2", 00:15:26.154 "trsvcid": "4420" 00:15:26.154 }, 00:15:26.154 "peer_address": { 00:15:26.154 "trtype": "TCP", 
00:15:26.154 "adrfam": "IPv4", 00:15:26.154 "traddr": "10.0.0.1", 00:15:26.154 "trsvcid": "36494" 00:15:26.154 }, 00:15:26.154 "auth": { 00:15:26.154 "state": "completed", 00:15:26.154 "digest": "sha384", 00:15:26.154 "dhgroup": "ffdhe3072" 00:15:26.154 } 00:15:26.154 } 00:15:26.154 ]' 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.154 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.413 16:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.348 16:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.918 16:32:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.918 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.176 00:15:28.176 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.176 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.176 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.434 { 00:15:28.434 "cntlid": 67, 00:15:28.434 "qid": 0, 00:15:28.434 "state": "enabled", 00:15:28.434 "thread": "nvmf_tgt_poll_group_000", 00:15:28.434 "listen_address": { 00:15:28.434 "trtype": "TCP", 00:15:28.434 "adrfam": 
"IPv4", 00:15:28.434 "traddr": "10.0.0.2", 00:15:28.434 "trsvcid": "4420" 00:15:28.434 }, 00:15:28.434 "peer_address": { 00:15:28.434 "trtype": "TCP", 00:15:28.434 "adrfam": "IPv4", 00:15:28.434 "traddr": "10.0.0.1", 00:15:28.434 "trsvcid": "36518" 00:15:28.434 }, 00:15:28.434 "auth": { 00:15:28.434 "state": "completed", 00:15:28.434 "digest": "sha384", 00:15:28.434 "dhgroup": "ffdhe3072" 00:15:28.434 } 00:15:28.434 } 00:15:28.434 ]' 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.434 16:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.693 16:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.628 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.918 16:32:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.918 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.486 00:15:30.486 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.486 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.486 16:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.743 { 00:15:30.743 "cntlid": 69, 00:15:30.743 "qid": 0, 00:15:30.743 "state": "enabled", 00:15:30.743 "thread": 
"nvmf_tgt_poll_group_000", 00:15:30.743 "listen_address": { 00:15:30.743 "trtype": "TCP", 00:15:30.743 "adrfam": "IPv4", 00:15:30.743 "traddr": "10.0.0.2", 00:15:30.743 "trsvcid": "4420" 00:15:30.743 }, 00:15:30.743 "peer_address": { 00:15:30.743 "trtype": "TCP", 00:15:30.743 "adrfam": "IPv4", 00:15:30.743 "traddr": "10.0.0.1", 00:15:30.743 "trsvcid": "36554" 00:15:30.743 }, 00:15:30.743 "auth": { 00:15:30.743 "state": "completed", 00:15:30.743 "digest": "sha384", 00:15:30.743 "dhgroup": "ffdhe3072" 00:15:30.743 } 00:15:30.743 } 00:15:30.743 ]' 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.743 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.003 16:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.376 16:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.377 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.377 16:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.635 00:15:32.635 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.635 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.635 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.892 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.892 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.892 16:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.893 16:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.150 16:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.150 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.150 { 00:15:33.150 "cntlid": 71, 00:15:33.150 "qid": 0, 00:15:33.151 "state": "enabled", 00:15:33.151 "thread": 
"nvmf_tgt_poll_group_000", 00:15:33.151 "listen_address": { 00:15:33.151 "trtype": "TCP", 00:15:33.151 "adrfam": "IPv4", 00:15:33.151 "traddr": "10.0.0.2", 00:15:33.151 "trsvcid": "4420" 00:15:33.151 }, 00:15:33.151 "peer_address": { 00:15:33.151 "trtype": "TCP", 00:15:33.151 "adrfam": "IPv4", 00:15:33.151 "traddr": "10.0.0.1", 00:15:33.151 "trsvcid": "39472" 00:15:33.151 }, 00:15:33.151 "auth": { 00:15:33.151 "state": "completed", 00:15:33.151 "digest": "sha384", 00:15:33.151 "dhgroup": "ffdhe3072" 00:15:33.151 } 00:15:33.151 } 00:15:33.151 ]' 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.151 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.408 16:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.343 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:34.343 16:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.601 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.170 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:35.170 { 00:15:35.170 "cntlid": 73, 00:15:35.170 "qid": 0, 00:15:35.170 "state": "enabled", 00:15:35.170 "thread": "nvmf_tgt_poll_group_000", 00:15:35.170 "listen_address": { 00:15:35.170 "trtype": "TCP", 00:15:35.170 "adrfam": "IPv4", 00:15:35.170 "traddr": "10.0.0.2", 00:15:35.170 "trsvcid": "4420" 00:15:35.170 }, 00:15:35.170 "peer_address": { 00:15:35.170 "trtype": "TCP", 00:15:35.170 "adrfam": "IPv4", 00:15:35.170 "traddr": "10.0.0.1", 00:15:35.170 "trsvcid": "39498" 00:15:35.170 }, 00:15:35.170 "auth": { 00:15:35.170 "state": "completed", 00:15:35.170 "digest": "sha384", 00:15:35.170 "dhgroup": "ffdhe4096" 00:15:35.170 } 00:15:35.170 } 00:15:35.170 ]' 00:15:35.170 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.429 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.429 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.429 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.429 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.429 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.429 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.429 16:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.687 16:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret 
DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:15:36.625 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.626 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.626 16:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.626 16:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.626 16:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.626 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.626 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.626 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.884 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.453 00:15:37.453 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.453 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.453 16:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.719 16:32:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.719 { 00:15:37.719 "cntlid": 75, 00:15:37.719 "qid": 0, 00:15:37.719 "state": "enabled", 00:15:37.719 "thread": "nvmf_tgt_poll_group_000", 00:15:37.719 "listen_address": { 00:15:37.719 "trtype": "TCP", 00:15:37.719 "adrfam": "IPv4", 00:15:37.719 "traddr": "10.0.0.2", 00:15:37.719 "trsvcid": "4420" 00:15:37.719 }, 00:15:37.719 "peer_address": { 00:15:37.719 "trtype": "TCP", 00:15:37.719 "adrfam": "IPv4", 00:15:37.719 "traddr": "10.0.0.1", 00:15:37.719 "trsvcid": "39516" 00:15:37.719 }, 00:15:37.719 "auth": { 00:15:37.719 "state": "completed", 00:15:37.719 "digest": "sha384", 00:15:37.719 "dhgroup": "ffdhe4096" 00:15:37.719 } 00:15:37.719 } 00:15:37.719 ]' 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.719 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.004 16:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.941 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.199 16:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.459 00:15:39.717 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.717 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.717 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.717 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.717 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.717 16:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.717 16:32:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.975 { 00:15:39.975 "cntlid": 77, 00:15:39.975 "qid": 0, 00:15:39.975 "state": "enabled", 00:15:39.975 "thread": "nvmf_tgt_poll_group_000", 00:15:39.975 "listen_address": { 00:15:39.975 "trtype": "TCP", 00:15:39.975 "adrfam": "IPv4", 00:15:39.975 "traddr": "10.0.0.2", 00:15:39.975 "trsvcid": "4420" 00:15:39.975 }, 00:15:39.975 "peer_address": { 00:15:39.975 "trtype": "TCP", 00:15:39.975 "adrfam": "IPv4", 00:15:39.975 "traddr": "10.0.0.1", 00:15:39.975 "trsvcid": "39536" 00:15:39.975 }, 00:15:39.975 "auth": { 00:15:39.975 "state": "completed", 00:15:39.975 "digest": "sha384", 00:15:39.975 "dhgroup": "ffdhe4096" 00:15:39.975 } 00:15:39.975 } 00:15:39.975 ]' 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.975 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.233 16:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.169 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:41.427 16:32:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.427 16:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.995 00:15:41.995 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.995 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.995 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.253 16:32:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.253 { 00:15:42.253 "cntlid": 79, 00:15:42.253 "qid": 0, 00:15:42.253 "state": "enabled", 00:15:42.253 "thread": "nvmf_tgt_poll_group_000", 00:15:42.253 "listen_address": { 00:15:42.253 "trtype": "TCP", 00:15:42.253 "adrfam": "IPv4", 00:15:42.253 "traddr": "10.0.0.2", 00:15:42.253 "trsvcid": "4420" 00:15:42.253 }, 00:15:42.253 "peer_address": { 00:15:42.253 "trtype": "TCP", 00:15:42.253 "adrfam": "IPv4", 00:15:42.253 "traddr": "10.0.0.1", 00:15:42.253 "trsvcid": "47264" 00:15:42.253 }, 00:15:42.253 "auth": { 00:15:42.253 "state": "completed", 00:15:42.253 "digest": "sha384", 00:15:42.253 "dhgroup": "ffdhe4096" 00:15:42.253 } 00:15:42.253 } 00:15:42.253 ]' 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.253 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.512 16:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:43.445 16:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.703 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.267 00:15:44.267 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.267 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.267 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.525 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.525 16:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:44.525 16:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.525 16:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.525 16:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.525 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.525 { 00:15:44.525 "cntlid": 81, 00:15:44.525 "qid": 0, 00:15:44.525 "state": "enabled", 00:15:44.525 "thread": "nvmf_tgt_poll_group_000", 00:15:44.525 "listen_address": { 00:15:44.525 "trtype": "TCP", 00:15:44.525 "adrfam": "IPv4", 00:15:44.525 "traddr": "10.0.0.2", 00:15:44.525 "trsvcid": "4420" 00:15:44.525 }, 00:15:44.525 "peer_address": { 00:15:44.525 "trtype": "TCP", 00:15:44.525 "adrfam": "IPv4", 00:15:44.525 "traddr": "10.0.0.1", 00:15:44.525 "trsvcid": "47294" 00:15:44.525 }, 00:15:44.525 "auth": { 00:15:44.525 "state": "completed", 00:15:44.525 "digest": "sha384", 00:15:44.525 "dhgroup": "ffdhe6144" 00:15:44.525 } 00:15:44.525 } 00:15:44.525 ]' 00:15:44.525 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.525 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.525 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.525 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.525 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.785 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.785 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.785 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:45.044 16:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.980 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.238 16:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.803 00:15:46.803 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.803 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.803 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.061 { 00:15:47.061 "cntlid": 83, 00:15:47.061 "qid": 0, 00:15:47.061 "state": "enabled", 00:15:47.061 "thread": "nvmf_tgt_poll_group_000", 00:15:47.061 "listen_address": { 00:15:47.061 "trtype": "TCP", 00:15:47.061 "adrfam": "IPv4", 00:15:47.061 "traddr": "10.0.0.2", 00:15:47.061 "trsvcid": "4420" 00:15:47.061 }, 00:15:47.061 "peer_address": { 00:15:47.061 "trtype": "TCP", 00:15:47.061 "adrfam": "IPv4", 00:15:47.061 "traddr": "10.0.0.1", 00:15:47.061 "trsvcid": "47322" 00:15:47.061 }, 00:15:47.061 "auth": { 00:15:47.061 "state": "completed", 00:15:47.061 "digest": "sha384", 00:15:47.061 "dhgroup": "ffdhe6144" 00:15:47.061 } 00:15:47.061 } 00:15:47.061 ]' 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.061 16:32:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.321 16:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:15:48.252 16:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.252 16:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.252 16:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.252 16:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.252 16:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.252 16:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.253 16:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.253 16:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.511 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.079 00:15:49.079 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.079 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.079 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:49.337 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.337 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.337 16:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.337 16:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.337 16:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.337 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.337 { 00:15:49.337 "cntlid": 85, 00:15:49.337 "qid": 0, 00:15:49.337 "state": "enabled", 00:15:49.337 "thread": "nvmf_tgt_poll_group_000", 00:15:49.337 "listen_address": { 00:15:49.337 "trtype": "TCP", 00:15:49.337 "adrfam": "IPv4", 00:15:49.337 "traddr": "10.0.0.2", 00:15:49.337 "trsvcid": "4420" 00:15:49.337 }, 00:15:49.337 "peer_address": { 00:15:49.337 "trtype": "TCP", 00:15:49.337 "adrfam": "IPv4", 00:15:49.337 "traddr": "10.0.0.1", 00:15:49.337 "trsvcid": "47344" 00:15:49.337 }, 00:15:49.337 "auth": { 00:15:49.337 "state": "completed", 00:15:49.337 "digest": "sha384", 00:15:49.337 "dhgroup": "ffdhe6144" 00:15:49.337 } 00:15:49.337 } 00:15:49.337 ]' 00:15:49.337 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.595 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.595 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.595 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.595 16:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.595 16:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.595 16:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:49.595 16:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.853 16:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.788 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:51.046 16:32:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.046 16:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.637 00:15:51.637 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.637 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.637 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.911 { 00:15:51.911 "cntlid": 87, 00:15:51.911 "qid": 0, 00:15:51.911 "state": "enabled", 00:15:51.911 "thread": "nvmf_tgt_poll_group_000", 00:15:51.911 "listen_address": { 00:15:51.911 "trtype": "TCP", 00:15:51.911 "adrfam": "IPv4", 00:15:51.911 "traddr": "10.0.0.2", 00:15:51.911 "trsvcid": "4420" 00:15:51.911 }, 00:15:51.911 "peer_address": { 00:15:51.911 "trtype": "TCP", 00:15:51.911 "adrfam": "IPv4", 00:15:51.911 "traddr": "10.0.0.1", 00:15:51.911 "trsvcid": "47378" 00:15:51.911 }, 00:15:51.911 "auth": { 00:15:51.911 "state": "completed", 00:15:51.911 "digest": "sha384", 00:15:51.911 "dhgroup": "ffdhe6144" 00:15:51.911 } 00:15:51.911 } 00:15:51.911 ]' 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.911 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.177 16:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.117 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.376 16:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.312 00:15:54.312 16:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.312 16:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.312 16:32:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.570 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.570 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.570 16:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.570 16:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.570 16:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.570 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.570 { 00:15:54.570 "cntlid": 89, 00:15:54.570 "qid": 0, 00:15:54.570 "state": "enabled", 00:15:54.570 "thread": "nvmf_tgt_poll_group_000", 00:15:54.570 "listen_address": { 00:15:54.570 "trtype": "TCP", 00:15:54.570 "adrfam": "IPv4", 00:15:54.570 "traddr": "10.0.0.2", 00:15:54.570 "trsvcid": "4420" 00:15:54.570 }, 00:15:54.571 "peer_address": { 00:15:54.571 "trtype": "TCP", 00:15:54.571 "adrfam": "IPv4", 00:15:54.571 "traddr": "10.0.0.1", 00:15:54.571 "trsvcid": "59848" 00:15:54.571 }, 00:15:54.571 "auth": { 00:15:54.571 "state": "completed", 00:15:54.571 "digest": "sha384", 00:15:54.571 "dhgroup": "ffdhe8192" 00:15:54.571 } 00:15:54.571 } 00:15:54.571 ]' 00:15:54.571 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.571 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.571 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.828 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.828 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.828 16:32:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.828 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.828 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.088 16:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:56.021 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.278 16:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.279 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.279 16:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.214 00:15:57.214 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:15:57.214 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.214 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.472 { 00:15:57.472 "cntlid": 91, 00:15:57.472 "qid": 0, 00:15:57.472 "state": "enabled", 00:15:57.472 "thread": "nvmf_tgt_poll_group_000", 00:15:57.472 "listen_address": { 00:15:57.472 "trtype": "TCP", 00:15:57.472 "adrfam": "IPv4", 00:15:57.472 "traddr": "10.0.0.2", 00:15:57.472 "trsvcid": "4420" 00:15:57.472 }, 00:15:57.472 "peer_address": { 00:15:57.472 "trtype": "TCP", 00:15:57.472 "adrfam": "IPv4", 00:15:57.472 "traddr": "10.0.0.1", 00:15:57.472 "trsvcid": "59868" 00:15:57.472 }, 00:15:57.472 "auth": { 00:15:57.472 "state": "completed", 00:15:57.472 "digest": "sha384", 00:15:57.472 "dhgroup": "ffdhe8192" 00:15:57.472 } 00:15:57.472 } 00:15:57.472 ]' 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.472 16:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.730 16:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.662 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.921 16:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.179 16:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.179 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.179 16:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.113 
00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.113 { 00:16:00.113 "cntlid": 93, 00:16:00.113 "qid": 0, 00:16:00.113 "state": "enabled", 00:16:00.113 "thread": "nvmf_tgt_poll_group_000", 00:16:00.113 "listen_address": { 00:16:00.113 "trtype": "TCP", 00:16:00.113 "adrfam": "IPv4", 00:16:00.113 "traddr": "10.0.0.2", 00:16:00.113 "trsvcid": "4420" 00:16:00.113 }, 00:16:00.113 "peer_address": { 00:16:00.113 "trtype": "TCP", 00:16:00.113 "adrfam": "IPv4", 00:16:00.113 "traddr": "10.0.0.1", 00:16:00.113 "trsvcid": "59908" 00:16:00.113 }, 00:16:00.113 "auth": { 00:16:00.113 "state": "completed", 00:16:00.113 "digest": "sha384", 00:16:00.113 "dhgroup": "ffdhe8192" 00:16:00.113 } 00:16:00.113 } 00:16:00.113 ]' 00:16:00.113 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.371 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.371 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.371 16:32:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:00.371 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.371 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.371 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.371 16:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.629 16:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:16:01.565 16:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.565 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.565 16:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.565 16:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.565 16:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.565 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.565 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:16:01.565 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.823 16:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.760 
00:16:02.760 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.760 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.760 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.017 { 00:16:03.017 "cntlid": 95, 00:16:03.017 "qid": 0, 00:16:03.017 "state": "enabled", 00:16:03.017 "thread": "nvmf_tgt_poll_group_000", 00:16:03.017 "listen_address": { 00:16:03.017 "trtype": "TCP", 00:16:03.017 "adrfam": "IPv4", 00:16:03.017 "traddr": "10.0.0.2", 00:16:03.017 "trsvcid": "4420" 00:16:03.017 }, 00:16:03.017 "peer_address": { 00:16:03.017 "trtype": "TCP", 00:16:03.017 "adrfam": "IPv4", 00:16:03.017 "traddr": "10.0.0.1", 00:16:03.017 "trsvcid": "56296" 00:16:03.017 }, 00:16:03.017 "auth": { 00:16:03.017 "state": "completed", 00:16:03.017 "digest": "sha384", 00:16:03.017 "dhgroup": "ffdhe8192" 00:16:03.017 } 00:16:03.017 } 00:16:03.017 ]' 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.017 16:32:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.017 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.018 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.276 16:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:16:04.210 16:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.210 16:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.210 16:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.210 16:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.468 16:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.468 16:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:04.468 16:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.468 16:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:16:04.468 16:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.468 16:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.728 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.986 00:16:04.986 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.986 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.986 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.243 { 00:16:05.243 "cntlid": 97, 00:16:05.243 "qid": 0, 00:16:05.243 "state": "enabled", 00:16:05.243 "thread": "nvmf_tgt_poll_group_000", 00:16:05.243 "listen_address": { 00:16:05.243 "trtype": "TCP", 00:16:05.243 "adrfam": "IPv4", 00:16:05.243 "traddr": "10.0.0.2", 00:16:05.243 "trsvcid": "4420" 00:16:05.243 }, 00:16:05.243 "peer_address": { 00:16:05.243 "trtype": "TCP", 00:16:05.243 "adrfam": "IPv4", 00:16:05.243 "traddr": "10.0.0.1", 00:16:05.243 "trsvcid": "56332" 00:16:05.243 }, 00:16:05.243 "auth": { 00:16:05.243 "state": "completed", 00:16:05.243 "digest": "sha512", 00:16:05.243 "dhgroup": "null" 00:16:05.243 } 00:16:05.243 } 00:16:05.243 ]' 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.243 16:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.500 16:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.470 16:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.727 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.295 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.295 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.295 { 00:16:07.295 "cntlid": 99, 00:16:07.295 "qid": 0, 00:16:07.295 "state": "enabled", 00:16:07.295 "thread": "nvmf_tgt_poll_group_000", 00:16:07.295 "listen_address": { 00:16:07.295 "trtype": "TCP", 00:16:07.295 "adrfam": "IPv4", 00:16:07.295 "traddr": "10.0.0.2", 00:16:07.295 "trsvcid": "4420" 00:16:07.295 }, 00:16:07.295 "peer_address": { 00:16:07.295 "trtype": "TCP", 00:16:07.295 "adrfam": "IPv4", 00:16:07.295 "traddr": "10.0.0.1", 00:16:07.295 "trsvcid": "56362" 00:16:07.295 }, 00:16:07.295 "auth": { 00:16:07.295 "state": "completed", 00:16:07.295 "digest": "sha512", 00:16:07.295 "dhgroup": "null" 00:16:07.295 } 00:16:07.295 } 00:16:07.295 ]' 00:16:07.295 
16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.553 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.553 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.553 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:07.553 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.553 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.554 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.554 16:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.812 16:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.750 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.008 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.008 16:32:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.575 00:16:09.575 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.575 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.575 16:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.575 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.575 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.575 16:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.575 16:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.575 16:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.575 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.575 { 00:16:09.575 "cntlid": 101, 00:16:09.575 "qid": 0, 00:16:09.575 "state": "enabled", 00:16:09.575 "thread": "nvmf_tgt_poll_group_000", 00:16:09.575 "listen_address": { 00:16:09.575 "trtype": "TCP", 00:16:09.575 "adrfam": "IPv4", 00:16:09.575 "traddr": "10.0.0.2", 00:16:09.575 "trsvcid": "4420" 00:16:09.575 }, 00:16:09.575 "peer_address": { 00:16:09.575 "trtype": "TCP", 00:16:09.575 "adrfam": "IPv4", 00:16:09.575 "traddr": "10.0.0.1", 00:16:09.575 "trsvcid": "56382" 00:16:09.575 }, 00:16:09.575 "auth": { 00:16:09.575 "state": "completed", 00:16:09.575 "digest": "sha512", 00:16:09.575 "dhgroup": "null" 
00:16:09.575 } 00:16:09.575 } 00:16:09.575 ]' 00:16:09.575 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.833 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.833 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.833 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:09.833 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.833 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.833 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.833 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.092 16:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.043 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.301 16:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.301 16:32:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.560 00:16:11.819 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.819 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.819 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.819 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.819 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.819 16:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.819 16:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.078 { 00:16:12.078 "cntlid": 103, 00:16:12.078 "qid": 0, 00:16:12.078 "state": "enabled", 00:16:12.078 "thread": "nvmf_tgt_poll_group_000", 00:16:12.078 "listen_address": { 00:16:12.078 "trtype": "TCP", 00:16:12.078 "adrfam": "IPv4", 00:16:12.078 "traddr": "10.0.0.2", 00:16:12.078 "trsvcid": "4420" 00:16:12.078 }, 00:16:12.078 "peer_address": { 00:16:12.078 "trtype": "TCP", 00:16:12.078 "adrfam": "IPv4", 00:16:12.078 "traddr": "10.0.0.1", 00:16:12.078 "trsvcid": "58772" 00:16:12.078 }, 00:16:12.078 "auth": { 00:16:12.078 "state": "completed", 00:16:12.078 "digest": "sha512", 00:16:12.078 "dhgroup": "null" 00:16:12.078 } 00:16:12.078 } 
00:16:12.078 ]' 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.078 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.336 16:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:16:13.273 16:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.273 16:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.273 16:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.273 16:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.273 16:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:16:13.273 16:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.274 16:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.274 16:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.274 16:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.531 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.805 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.069 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.069 { 00:16:14.069 "cntlid": 105, 00:16:14.069 "qid": 0, 00:16:14.069 "state": "enabled", 00:16:14.069 "thread": "nvmf_tgt_poll_group_000", 00:16:14.069 "listen_address": { 00:16:14.069 "trtype": "TCP", 00:16:14.069 "adrfam": "IPv4", 00:16:14.069 "traddr": "10.0.0.2", 00:16:14.069 "trsvcid": "4420" 00:16:14.069 }, 00:16:14.069 "peer_address": { 00:16:14.069 "trtype": "TCP", 00:16:14.069 "adrfam": "IPv4", 00:16:14.069 "traddr": "10.0.0.1", 00:16:14.069 "trsvcid": "58794" 00:16:14.069 }, 00:16:14.069 "auth": { 00:16:14.069 
"state": "completed", 00:16:14.069 "digest": "sha512", 00:16:14.069 "dhgroup": "ffdhe2048" 00:16:14.069 } 00:16:14.069 } 00:16:14.069 ]' 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.327 16:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.584 16:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:16:15.515 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.515 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.515 16:32:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.515 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.515 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.515 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.515 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.515 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.772 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.336 00:16:16.336 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.336 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.336 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.593 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.593 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.593 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.593 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.593 16:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.593 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.593 { 00:16:16.593 "cntlid": 107, 00:16:16.593 "qid": 0, 00:16:16.593 "state": "enabled", 00:16:16.593 "thread": "nvmf_tgt_poll_group_000", 00:16:16.593 "listen_address": { 00:16:16.593 "trtype": "TCP", 00:16:16.593 "adrfam": "IPv4", 00:16:16.593 "traddr": "10.0.0.2", 00:16:16.593 "trsvcid": "4420" 00:16:16.593 }, 00:16:16.593 "peer_address": { 00:16:16.593 "trtype": "TCP", 
00:16:16.593 "adrfam": "IPv4", 00:16:16.593 "traddr": "10.0.0.1", 00:16:16.593 "trsvcid": "58824" 00:16:16.593 }, 00:16:16.593 "auth": { 00:16:16.593 "state": "completed", 00:16:16.593 "digest": "sha512", 00:16:16.593 "dhgroup": "ffdhe2048" 00:16:16.593 } 00:16:16.593 } 00:16:16.593 ]' 00:16:16.593 16:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.593 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.593 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.593 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:16.593 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.593 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.593 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.593 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.850 16:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:16:17.786 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.786 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.786 16:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.786 16:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.786 16:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.786 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.787 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.787 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.045 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.612 00:16:18.612 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.612 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.612 16:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.612 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.612 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.612 16:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.612 16:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.612 16:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.612 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.612 { 00:16:18.612 "cntlid": 109, 00:16:18.612 "qid": 0, 00:16:18.612 "state": "enabled", 00:16:18.612 "thread": "nvmf_tgt_poll_group_000", 00:16:18.612 "listen_address": { 00:16:18.612 "trtype": "TCP", 00:16:18.612 "adrfam": "IPv4", 00:16:18.612 "traddr": "10.0.0.2", 00:16:18.612 "trsvcid": "4420" 
00:16:18.612 }, 00:16:18.612 "peer_address": { 00:16:18.612 "trtype": "TCP", 00:16:18.612 "adrfam": "IPv4", 00:16:18.612 "traddr": "10.0.0.1", 00:16:18.612 "trsvcid": "58848" 00:16:18.612 }, 00:16:18.612 "auth": { 00:16:18.612 "state": "completed", 00:16:18.612 "digest": "sha512", 00:16:18.612 "dhgroup": "ffdhe2048" 00:16:18.612 } 00:16:18.612 } 00:16:18.612 ]' 00:16:18.612 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.870 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.870 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.870 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.870 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.870 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.870 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.870 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.128 16:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.099 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.357 16:32:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.357 16:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.614 00:16:20.614 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.614 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.614 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.873 { 00:16:20.873 "cntlid": 111, 00:16:20.873 "qid": 0, 00:16:20.873 "state": "enabled", 00:16:20.873 "thread": "nvmf_tgt_poll_group_000", 00:16:20.873 "listen_address": { 00:16:20.873 "trtype": "TCP", 00:16:20.873 "adrfam": "IPv4", 00:16:20.873 "traddr": "10.0.0.2", 
00:16:20.873 "trsvcid": "4420" 00:16:20.873 }, 00:16:20.873 "peer_address": { 00:16:20.873 "trtype": "TCP", 00:16:20.873 "adrfam": "IPv4", 00:16:20.873 "traddr": "10.0.0.1", 00:16:20.873 "trsvcid": "58876" 00:16:20.873 }, 00:16:20.873 "auth": { 00:16:20.873 "state": "completed", 00:16:20.873 "digest": "sha512", 00:16:20.873 "dhgroup": "ffdhe2048" 00:16:20.873 } 00:16:20.873 } 00:16:20.873 ]' 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.873 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.131 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.131 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.131 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.131 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.389 16:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.323 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.582 16:33:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.582 16:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.840 00:16:22.840 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.840 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.840 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.098 { 00:16:23.098 "cntlid": 113, 00:16:23.098 "qid": 0, 00:16:23.098 "state": "enabled", 00:16:23.098 "thread": 
"nvmf_tgt_poll_group_000", 00:16:23.098 "listen_address": { 00:16:23.098 "trtype": "TCP", 00:16:23.098 "adrfam": "IPv4", 00:16:23.098 "traddr": "10.0.0.2", 00:16:23.098 "trsvcid": "4420" 00:16:23.098 }, 00:16:23.098 "peer_address": { 00:16:23.098 "trtype": "TCP", 00:16:23.098 "adrfam": "IPv4", 00:16:23.098 "traddr": "10.0.0.1", 00:16:23.098 "trsvcid": "54798" 00:16:23.098 }, 00:16:23.098 "auth": { 00:16:23.098 "state": "completed", 00:16:23.098 "digest": "sha512", 00:16:23.098 "dhgroup": "ffdhe3072" 00:16:23.098 } 00:16:23.098 } 00:16:23.098 ]' 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.098 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.357 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.357 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.357 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.617 16:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:16:24.550 16:33:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.550 16:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.550 16:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.550 16:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.550 16:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.550 16:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.550 16:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.550 16:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.808 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.066 00:16:25.066 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.066 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.066 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.324 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.324 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:25.325 { 00:16:25.325 "cntlid": 115, 00:16:25.325 "qid": 0, 00:16:25.325 "state": "enabled", 00:16:25.325 "thread": "nvmf_tgt_poll_group_000", 00:16:25.325 "listen_address": { 00:16:25.325 "trtype": "TCP", 00:16:25.325 "adrfam": "IPv4", 00:16:25.325 "traddr": "10.0.0.2", 00:16:25.325 "trsvcid": "4420" 00:16:25.325 }, 00:16:25.325 "peer_address": { 00:16:25.325 "trtype": "TCP", 00:16:25.325 "adrfam": "IPv4", 00:16:25.325 "traddr": "10.0.0.1", 00:16:25.325 "trsvcid": "54826" 00:16:25.325 }, 00:16:25.325 "auth": { 00:16:25.325 "state": "completed", 00:16:25.325 "digest": "sha512", 00:16:25.325 "dhgroup": "ffdhe3072" 00:16:25.325 } 00:16:25.325 } 00:16:25.325 ]' 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.325 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.584 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.584 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.584 16:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.842 16:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret 
DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:16:26.780 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.781 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.781 16:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.781 16:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.781 16:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.781 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.781 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.781 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.039 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.298 00:16:27.298 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.298 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.298 16:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.555 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.555 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.555 16:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.555 16:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.555 16:33:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.555 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.555 { 00:16:27.555 "cntlid": 117, 00:16:27.555 "qid": 0, 00:16:27.555 "state": "enabled", 00:16:27.555 "thread": "nvmf_tgt_poll_group_000", 00:16:27.555 "listen_address": { 00:16:27.555 "trtype": "TCP", 00:16:27.555 "adrfam": "IPv4", 00:16:27.555 "traddr": "10.0.0.2", 00:16:27.555 "trsvcid": "4420" 00:16:27.555 }, 00:16:27.555 "peer_address": { 00:16:27.555 "trtype": "TCP", 00:16:27.555 "adrfam": "IPv4", 00:16:27.555 "traddr": "10.0.0.1", 00:16:27.555 "trsvcid": "54844" 00:16:27.555 }, 00:16:27.555 "auth": { 00:16:27.555 "state": "completed", 00:16:27.555 "digest": "sha512", 00:16:27.555 "dhgroup": "ffdhe3072" 00:16:27.555 } 00:16:27.555 } 00:16:27.555 ]' 00:16:27.555 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.813 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.813 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.813 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.813 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.813 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.813 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.813 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.070 16:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:16:29.002 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.002 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.003 16:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.003 16:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.003 16:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.003 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.003 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.003 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.260 16:33:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.260 16:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.518 00:16:29.518 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.518 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.518 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.775 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.775 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.775 16:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.775 16:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.775 16:33:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.775 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.775 { 00:16:29.775 "cntlid": 119, 00:16:29.775 "qid": 0, 00:16:29.775 "state": "enabled", 00:16:29.775 "thread": "nvmf_tgt_poll_group_000", 00:16:29.775 "listen_address": { 00:16:29.775 "trtype": "TCP", 00:16:29.775 "adrfam": "IPv4", 00:16:29.775 "traddr": "10.0.0.2", 00:16:29.775 "trsvcid": "4420" 00:16:29.775 }, 00:16:29.775 "peer_address": { 00:16:29.775 "trtype": "TCP", 00:16:29.775 "adrfam": "IPv4", 00:16:29.775 "traddr": "10.0.0.1", 00:16:29.775 "trsvcid": "54878" 00:16:29.775 }, 00:16:29.775 "auth": { 00:16:29.775 "state": "completed", 00:16:29.775 "digest": "sha512", 00:16:29.775 "dhgroup": "ffdhe3072" 00:16:29.775 } 00:16:29.775 } 00:16:29.775 ]' 00:16:29.775 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.033 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.033 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.033 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.033 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.033 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.033 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.033 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.290 16:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.224 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.481 16:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.738 00:16:31.738 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.738 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.738 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.996 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.996 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.996 16:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:31.996 16:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.996 16:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.996 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.996 { 00:16:31.996 "cntlid": 121, 00:16:31.996 "qid": 0, 00:16:31.996 "state": "enabled", 00:16:31.996 "thread": "nvmf_tgt_poll_group_000", 00:16:31.996 "listen_address": { 00:16:31.996 "trtype": "TCP", 00:16:31.996 "adrfam": "IPv4", 00:16:31.996 "traddr": "10.0.0.2", 00:16:31.996 "trsvcid": "4420" 00:16:31.996 }, 00:16:31.996 "peer_address": { 00:16:31.996 "trtype": "TCP", 00:16:31.996 "adrfam": "IPv4", 00:16:31.996 "traddr": "10.0.0.1", 00:16:31.996 "trsvcid": "46124" 00:16:31.996 }, 00:16:31.996 "auth": { 00:16:31.996 "state": "completed", 00:16:31.996 "digest": "sha512", 00:16:31.996 "dhgroup": "ffdhe4096" 00:16:31.996 } 00:16:31.996 } 00:16:31.996 ]' 00:16:31.996 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.254 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.254 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.254 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.254 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.254 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.254 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.254 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.512 16:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.460 16:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.718 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:33.718 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.718 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.718 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:33.719 16:33:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.719 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.719 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.719 16:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.719 16:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.719 16:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.719 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.719 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.979 00:16:33.979 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.979 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.979 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.544 { 00:16:34.544 "cntlid": 123, 00:16:34.544 "qid": 0, 00:16:34.544 "state": "enabled", 00:16:34.544 "thread": "nvmf_tgt_poll_group_000", 00:16:34.544 "listen_address": { 00:16:34.544 "trtype": "TCP", 00:16:34.544 "adrfam": "IPv4", 00:16:34.544 "traddr": "10.0.0.2", 00:16:34.544 "trsvcid": "4420" 00:16:34.544 }, 00:16:34.544 "peer_address": { 00:16:34.544 "trtype": "TCP", 00:16:34.544 "adrfam": "IPv4", 00:16:34.544 "traddr": "10.0.0.1", 00:16:34.544 "trsvcid": "46152" 00:16:34.544 }, 00:16:34.544 "auth": { 00:16:34.544 "state": "completed", 00:16:34.544 "digest": "sha512", 00:16:34.544 "dhgroup": "ffdhe4096" 00:16:34.544 } 00:16:34.544 } 00:16:34.544 ]' 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.544 16:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.803 16:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.738 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.996 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.252 00:16:36.252 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.252 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.252 16:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.509 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:16:36.509 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.509 16:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.510 16:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.510 16:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.510 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.510 { 00:16:36.510 "cntlid": 125, 00:16:36.510 "qid": 0, 00:16:36.510 "state": "enabled", 00:16:36.510 "thread": "nvmf_tgt_poll_group_000", 00:16:36.510 "listen_address": { 00:16:36.510 "trtype": "TCP", 00:16:36.510 "adrfam": "IPv4", 00:16:36.510 "traddr": "10.0.0.2", 00:16:36.510 "trsvcid": "4420" 00:16:36.510 }, 00:16:36.510 "peer_address": { 00:16:36.510 "trtype": "TCP", 00:16:36.510 "adrfam": "IPv4", 00:16:36.510 "traddr": "10.0.0.1", 00:16:36.510 "trsvcid": "46168" 00:16:36.510 }, 00:16:36.510 "auth": { 00:16:36.510 "state": "completed", 00:16:36.510 "digest": "sha512", 00:16:36.510 "dhgroup": "ffdhe4096" 00:16:36.510 } 00:16:36.510 } 00:16:36.510 ]' 00:16:36.510 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.768 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.768 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.768 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.768 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.768 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.768 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.768 16:33:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.026 16:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.960 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.218 16:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.476 00:16:38.476 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.476 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.476 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.733 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:38.733 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.733 16:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.733 16:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.733 16:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.733 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.733 { 00:16:38.733 "cntlid": 127, 00:16:38.733 "qid": 0, 00:16:38.733 "state": "enabled", 00:16:38.733 "thread": "nvmf_tgt_poll_group_000", 00:16:38.733 "listen_address": { 00:16:38.733 "trtype": "TCP", 00:16:38.733 "adrfam": "IPv4", 00:16:38.733 "traddr": "10.0.0.2", 00:16:38.733 "trsvcid": "4420" 00:16:38.733 }, 00:16:38.733 "peer_address": { 00:16:38.733 "trtype": "TCP", 00:16:38.734 "adrfam": "IPv4", 00:16:38.734 "traddr": "10.0.0.1", 00:16:38.734 "trsvcid": "46192" 00:16:38.734 }, 00:16:38.734 "auth": { 00:16:38.734 "state": "completed", 00:16:38.734 "digest": "sha512", 00:16:38.734 "dhgroup": "ffdhe4096" 00:16:38.734 } 00:16:38.734 } 00:16:38.734 ]' 00:16:38.734 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.992 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.992 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.992 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.992 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.992 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.992 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.992 16:33:18 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.250 16:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.189 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.447 16:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.012 00:16:41.012 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.012 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.012 16:33:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.269 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.270 { 00:16:41.270 "cntlid": 129, 00:16:41.270 "qid": 0, 00:16:41.270 "state": "enabled", 00:16:41.270 "thread": "nvmf_tgt_poll_group_000", 00:16:41.270 "listen_address": { 00:16:41.270 "trtype": "TCP", 00:16:41.270 "adrfam": "IPv4", 00:16:41.270 "traddr": "10.0.0.2", 00:16:41.270 "trsvcid": "4420" 00:16:41.270 }, 00:16:41.270 "peer_address": { 00:16:41.270 "trtype": "TCP", 00:16:41.270 "adrfam": "IPv4", 00:16:41.270 "traddr": "10.0.0.1", 00:16:41.270 "trsvcid": "46216" 00:16:41.270 }, 00:16:41.270 "auth": { 00:16:41.270 "state": "completed", 00:16:41.270 "digest": "sha512", 00:16:41.270 "dhgroup": "ffdhe6144" 00:16:41.270 } 00:16:41.270 } 00:16:41.270 ]' 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.270 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.528 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.528 16:33:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.528 16:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.787 16:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.720 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 1 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.978 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.545 00:16:43.545 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.545 16:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.545 16:33:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.802 { 00:16:43.802 "cntlid": 131, 00:16:43.802 "qid": 0, 00:16:43.802 "state": "enabled", 00:16:43.802 "thread": "nvmf_tgt_poll_group_000", 00:16:43.802 "listen_address": { 00:16:43.802 "trtype": "TCP", 00:16:43.802 "adrfam": "IPv4", 00:16:43.802 "traddr": "10.0.0.2", 00:16:43.802 "trsvcid": "4420" 00:16:43.802 }, 00:16:43.802 "peer_address": { 00:16:43.802 "trtype": "TCP", 00:16:43.802 "adrfam": "IPv4", 00:16:43.802 "traddr": "10.0.0.1", 00:16:43.802 "trsvcid": "47856" 00:16:43.802 }, 00:16:43.802 "auth": { 00:16:43.802 "state": "completed", 00:16:43.802 "digest": "sha512", 00:16:43.802 "dhgroup": "ffdhe6144" 00:16:43.802 } 00:16:43.802 } 00:16:43.802 ]' 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.802 16:33:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.802 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.059 16:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.996 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.255 16:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.820 00:16:45.820 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.820 16:33:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.820 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.079 { 00:16:46.079 "cntlid": 133, 00:16:46.079 "qid": 0, 00:16:46.079 "state": "enabled", 00:16:46.079 "thread": "nvmf_tgt_poll_group_000", 00:16:46.079 "listen_address": { 00:16:46.079 "trtype": "TCP", 00:16:46.079 "adrfam": "IPv4", 00:16:46.079 "traddr": "10.0.0.2", 00:16:46.079 "trsvcid": "4420" 00:16:46.079 }, 00:16:46.079 "peer_address": { 00:16:46.079 "trtype": "TCP", 00:16:46.079 "adrfam": "IPv4", 00:16:46.079 "traddr": "10.0.0.1", 00:16:46.079 "trsvcid": "47874" 00:16:46.079 }, 00:16:46.079 "auth": { 00:16:46.079 "state": "completed", 00:16:46.079 "digest": "sha512", 00:16:46.079 "dhgroup": "ffdhe6144" 00:16:46.079 } 00:16:46.079 } 00:16:46.079 ]' 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.079 16:33:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.079 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.337 16:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.311 16:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.571 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.505 00:16:48.505 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:16:48.505 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.505 16:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.505 { 00:16:48.505 "cntlid": 135, 00:16:48.505 "qid": 0, 00:16:48.505 "state": "enabled", 00:16:48.505 "thread": "nvmf_tgt_poll_group_000", 00:16:48.505 "listen_address": { 00:16:48.505 "trtype": "TCP", 00:16:48.505 "adrfam": "IPv4", 00:16:48.505 "traddr": "10.0.0.2", 00:16:48.505 "trsvcid": "4420" 00:16:48.505 }, 00:16:48.505 "peer_address": { 00:16:48.505 "trtype": "TCP", 00:16:48.505 "adrfam": "IPv4", 00:16:48.505 "traddr": "10.0.0.1", 00:16:48.505 "trsvcid": "47886" 00:16:48.505 }, 00:16:48.505 "auth": { 00:16:48.505 "state": "completed", 00:16:48.505 "digest": "sha512", 00:16:48.505 "dhgroup": "ffdhe6144" 00:16:48.505 } 00:16:48.505 } 00:16:48.505 ]' 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:16:48.505 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.762 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.762 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.762 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.020 16:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.952 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.952 16:33:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.209 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:50.209 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.209 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.209 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.209 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:50.209 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.209 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.210 16:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.210 16:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.210 16:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.210 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.210 16:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.143 00:16:51.143 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.143 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.143 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.143 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.143 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.143 16:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.143 16:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.401 { 00:16:51.401 "cntlid": 137, 00:16:51.401 "qid": 0, 00:16:51.401 "state": "enabled", 00:16:51.401 "thread": "nvmf_tgt_poll_group_000", 00:16:51.401 "listen_address": { 00:16:51.401 "trtype": "TCP", 00:16:51.401 "adrfam": "IPv4", 00:16:51.401 "traddr": "10.0.0.2", 00:16:51.401 "trsvcid": "4420" 00:16:51.401 }, 00:16:51.401 "peer_address": { 00:16:51.401 "trtype": "TCP", 00:16:51.401 "adrfam": "IPv4", 00:16:51.401 "traddr": "10.0.0.1", 00:16:51.401 "trsvcid": "47904" 00:16:51.401 }, 00:16:51.401 "auth": { 00:16:51.401 "state": "completed", 00:16:51.401 "digest": "sha512", 00:16:51.401 "dhgroup": "ffdhe8192" 00:16:51.401 } 00:16:51.401 } 00:16:51.401 ]' 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.401 16:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.659 16:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.594 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.852 16:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.803 00:16:53.803 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.803 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.803 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.061 { 00:16:54.061 "cntlid": 139, 00:16:54.061 "qid": 0, 00:16:54.061 "state": "enabled", 00:16:54.061 "thread": "nvmf_tgt_poll_group_000", 00:16:54.061 "listen_address": { 00:16:54.061 "trtype": "TCP", 00:16:54.061 "adrfam": "IPv4", 00:16:54.061 "traddr": "10.0.0.2", 00:16:54.061 "trsvcid": "4420" 00:16:54.061 }, 00:16:54.061 "peer_address": { 00:16:54.061 "trtype": "TCP", 00:16:54.061 "adrfam": "IPv4", 00:16:54.061 "traddr": "10.0.0.1", 00:16:54.061 "trsvcid": "34892" 00:16:54.061 }, 00:16:54.061 "auth": { 00:16:54.061 "state": "completed", 00:16:54.061 "digest": "sha512", 00:16:54.061 "dhgroup": "ffdhe8192" 00:16:54.061 } 00:16:54.061 } 00:16:54.061 ]' 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.061 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.319 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.319 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.319 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.577 16:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDU5YmJmNDQ5NDE2Y2I2ZTYzY2ZiMWMzMjYyNTJmNGaiwsNb: --dhchap-ctrl-secret DHHC-1:02:MGM1MzI5Y2E5NTIxMTk4NzE1Y2U3Njg5OTE2ZGJlMzVlZTk0ZWU1YWFjMmQ4YWNklaaTrA==: 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.513 16:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.771 16:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.710 00:16:56.710 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.710 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.710 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.968 { 00:16:56.968 "cntlid": 141, 00:16:56.968 "qid": 0, 00:16:56.968 "state": "enabled", 00:16:56.968 "thread": "nvmf_tgt_poll_group_000", 00:16:56.968 "listen_address": { 00:16:56.968 "trtype": "TCP", 00:16:56.968 "adrfam": "IPv4", 00:16:56.968 "traddr": "10.0.0.2", 00:16:56.968 "trsvcid": "4420" 00:16:56.968 }, 00:16:56.968 "peer_address": { 00:16:56.968 "trtype": "TCP", 00:16:56.968 "adrfam": "IPv4", 00:16:56.968 "traddr": "10.0.0.1", 00:16:56.968 "trsvcid": "34928" 00:16:56.968 }, 00:16:56.968 "auth": { 00:16:56.968 "state": "completed", 00:16:56.968 "digest": "sha512", 00:16:56.968 "dhgroup": "ffdhe8192" 00:16:56.968 } 00:16:56.968 } 00:16:56.968 ]' 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.968 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.227 16:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDdmNWYyMGI3Njc1MDUyZmQ0MDcwMjNlYWVlZjEzMzRhODg1Njc2ZDczMjM0YTNl5rwsvQ==: --dhchap-ctrl-secret DHHC-1:01:ZDA0MWE1OTFjYThhNjlmNGJiNGQxNzg0OTU5OGM0MTU53fdY: 00:16:58.163 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.163 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.163 16:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.163 16:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.163 16:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.163 
16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.163 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.163 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.421 16:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.358 00:16:59.358 16:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.358 16:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.358 16:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.616 { 00:16:59.616 "cntlid": 143, 00:16:59.616 "qid": 0, 00:16:59.616 "state": "enabled", 00:16:59.616 "thread": "nvmf_tgt_poll_group_000", 00:16:59.616 "listen_address": { 00:16:59.616 "trtype": "TCP", 00:16:59.616 "adrfam": "IPv4", 00:16:59.616 "traddr": "10.0.0.2", 00:16:59.616 "trsvcid": "4420" 00:16:59.616 }, 00:16:59.616 "peer_address": { 00:16:59.616 "trtype": "TCP", 00:16:59.616 "adrfam": "IPv4", 00:16:59.616 "traddr": "10.0.0.1", 00:16:59.616 "trsvcid": "34954" 00:16:59.616 }, 00:16:59.616 "auth": { 00:16:59.616 "state": "completed", 00:16:59.616 "digest": "sha512", 00:16:59.616 "dhgroup": "ffdhe8192" 00:16:59.616 } 00:16:59.616 } 00:16:59.616 ]' 00:16:59.616 16:33:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.616 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.875 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.875 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.875 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.875 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.875 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.133 16:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.068 
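[editor's note] The cycle above ends with auth.sh lines 46-48 running jq against the `nvmf_subsystem_get_qpairs` dump to confirm the negotiated digest, DH group, and auth state. A minimal sketch of that same check in Python (this is an illustration, not SPDK code; the JSON literal is copied from the qpairs dump logged above):

```python
import json

# qpairs JSON as dumped by 'rpc_cmd nvmf_subsystem_get_qpairs' in the log above
qpairs = json.loads("""[
  {"cntlid": 143, "qid": 0, "state": "enabled",
   "thread": "nvmf_tgt_poll_group_000",
   "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe8192"}}
]""")

auth = qpairs[0]["auth"]
assert auth["digest"] == "sha512"      # mirrors auth.sh@46: jq -r '.[0].auth.digest'
assert auth["dhgroup"] == "ffdhe8192"  # mirrors auth.sh@47: jq -r '.[0].auth.dhgroup'
assert auth["state"] == "completed"    # mirrors auth.sh@48: jq -r '.[0].auth.state'
```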
16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.068 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.377 16:33:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.377 16:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.338 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.338 16:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.596 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.596 { 00:17:02.596 "cntlid": 145, 00:17:02.596 "qid": 0, 00:17:02.596 "state": "enabled", 00:17:02.596 "thread": "nvmf_tgt_poll_group_000", 00:17:02.596 "listen_address": { 00:17:02.596 "trtype": "TCP", 00:17:02.596 "adrfam": 
"IPv4", 00:17:02.596 "traddr": "10.0.0.2", 00:17:02.596 "trsvcid": "4420" 00:17:02.596 }, 00:17:02.596 "peer_address": { 00:17:02.596 "trtype": "TCP", 00:17:02.596 "adrfam": "IPv4", 00:17:02.596 "traddr": "10.0.0.1", 00:17:02.596 "trsvcid": "34980" 00:17:02.596 }, 00:17:02.596 "auth": { 00:17:02.596 "state": "completed", 00:17:02.596 "digest": "sha512", 00:17:02.596 "dhgroup": "ffdhe8192" 00:17:02.596 } 00:17:02.596 } 00:17:02.596 ]' 00:17:02.596 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.596 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.596 16:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.596 16:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.596 16:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.596 16:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.596 16:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.596 16:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.854 16:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjRkZWQzNmQ4YTEzNDNjYWE0NGFjNWVkNjU1YTFiOTE3NmZmMTdkYmExZTNkZmZh4cvPkg==: --dhchap-ctrl-secret DHHC-1:03:M2U3ZDJhNjkxYWNlNjIzNmY4ZDZkMjRlNmQ0MmJjZjk5YzA4Mzc5NDI2MjM2YjY1NjE0MTY5NWQ2YTNlZjEzMFXe5s8=: 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.786 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.786 16:33:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.786 16:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:04.722 request: 00:17:04.722 { 00:17:04.722 "name": "nvme0", 00:17:04.722 "trtype": "tcp", 00:17:04.722 "traddr": "10.0.0.2", 00:17:04.722 "adrfam": "ipv4", 00:17:04.722 "trsvcid": "4420", 00:17:04.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:04.722 "prchk_reftag": false, 00:17:04.722 "prchk_guard": false, 00:17:04.722 "hdgst": false, 00:17:04.722 "ddgst": false, 00:17:04.722 "dhchap_key": "key2", 00:17:04.722 "method": "bdev_nvme_attach_controller", 00:17:04.722 "req_id": 1 00:17:04.722 } 00:17:04.722 Got JSON-RPC error response 00:17:04.722 response: 00:17:04.722 { 00:17:04.722 "code": -5, 00:17:04.722 "message": "Input/output error" 00:17:04.722 } 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
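[editor's note] The request dump above shows the parameter set that `rpc.py bdev_nvme_attach_controller` sends over the host socket; attaching with `key2`, which was never registered for this host on the subsystem, is expected to fail with JSON-RPC error -5 (Input/output error). A sketch reconstructing that payload from the logged fields (assumption: this hand-builds the envelope for illustration and does not use SPDK's client library):

```python
import json

# Fields copied verbatim from the "request:" dump in the log above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2024-03.io.spdk:cnode0",
        "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
        "prchk_reftag": False,
        "prchk_guard": False,
        "hdgst": False,
        "ddgst": False,
        "dhchap_key": "key2",  # not granted to this host -> target rejects auth
    },
}
payload = json.dumps(request)
assert "bdev_nvme_attach_controller" in payload
```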
00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.722 
16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.722 16:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.658 request: 00:17:05.658 { 00:17:05.658 "name": "nvme0", 00:17:05.658 "trtype": "tcp", 00:17:05.658 "traddr": "10.0.0.2", 00:17:05.658 "adrfam": "ipv4", 00:17:05.658 "trsvcid": "4420", 00:17:05.658 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:05.658 "prchk_reftag": false, 00:17:05.658 "prchk_guard": false, 00:17:05.658 "hdgst": false, 00:17:05.658 "ddgst": false, 00:17:05.658 "dhchap_key": "key1", 00:17:05.658 "dhchap_ctrlr_key": "ckey2", 00:17:05.658 "method": "bdev_nvme_attach_controller", 00:17:05.658 "req_id": 1 00:17:05.658 } 00:17:05.658 Got JSON-RPC error response 00:17:05.658 response: 00:17:05.658 { 00:17:05.658 "code": -5, 00:17:05.658 "message": "Input/output error" 00:17:05.658 } 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.658 16:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.599 request: 00:17:06.599 { 00:17:06.599 "name": "nvme0", 00:17:06.599 "trtype": "tcp", 00:17:06.599 "traddr": "10.0.0.2", 00:17:06.599 "adrfam": "ipv4", 00:17:06.599 "trsvcid": "4420", 00:17:06.599 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:06.599 "prchk_reftag": false, 00:17:06.599 "prchk_guard": false, 00:17:06.599 "hdgst": false, 00:17:06.599 "ddgst": false, 00:17:06.599 "dhchap_key": "key1", 00:17:06.599 "dhchap_ctrlr_key": "ckey1", 00:17:06.599 "method": "bdev_nvme_attach_controller", 00:17:06.599 "req_id": 1 00:17:06.599 } 00:17:06.599 Got JSON-RPC error response 00:17:06.599 response: 00:17:06.599 { 00:17:06.599 "code": -5, 00:17:06.599 "message": "Input/output error" 00:17:06.599 } 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.599 16:33:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1506438 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1506438 ']' 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1506438 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1506438 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1506438' 00:17:06.599 killing process with pid 1506438 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1506438 00:17:06.599 16:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1506438 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1529144 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1529144 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1529144 ']' 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.859 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1529144 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1529144 ']' 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.117 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.375 16:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.310 00:17:08.310 16:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.310 16:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.310 16:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.568 { 00:17:08.568 "cntlid": 1, 00:17:08.568 "qid": 0, 00:17:08.568 "state": "enabled", 00:17:08.568 "thread": "nvmf_tgt_poll_group_000", 00:17:08.568 "listen_address": { 00:17:08.568 "trtype": "TCP", 00:17:08.568 "adrfam": "IPv4", 00:17:08.568 "traddr": "10.0.0.2", 00:17:08.568 "trsvcid": "4420" 00:17:08.568 }, 00:17:08.568 "peer_address": { 00:17:08.568 "trtype": "TCP", 00:17:08.568 "adrfam": "IPv4", 00:17:08.568 "traddr": "10.0.0.1", 00:17:08.568 "trsvcid": 
"47802" 00:17:08.568 }, 00:17:08.568 "auth": { 00:17:08.568 "state": "completed", 00:17:08.568 "digest": "sha512", 00:17:08.568 "dhgroup": "ffdhe8192" 00:17:08.568 } 00:17:08.568 } 00:17:08.568 ]' 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.568 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.828 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.829 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.829 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.087 16:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzM3Njc3MzlkMjEzOTUwNjVlMjc3OTZkNTRlMDEyYWMzOTVjYTFlYTYzYTYyMGI2Y2FhNzM0Mzg4NDlhNGQ3Y62DaaU=: 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:10.018 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.275 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.533 request: 00:17:10.533 { 00:17:10.533 "name": "nvme0", 00:17:10.533 "trtype": "tcp", 00:17:10.533 "traddr": "10.0.0.2", 00:17:10.533 "adrfam": "ipv4", 00:17:10.533 "trsvcid": "4420", 00:17:10.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:10.533 "prchk_reftag": false, 00:17:10.533 "prchk_guard": false, 00:17:10.533 "hdgst": false, 00:17:10.533 "ddgst": false, 00:17:10.533 "dhchap_key": "key3", 00:17:10.533 "method": "bdev_nvme_attach_controller", 00:17:10.533 "req_id": 1 00:17:10.533 } 00:17:10.533 Got JSON-RPC error response 00:17:10.533 response: 00:17:10.533 { 00:17:10.533 "code": -5, 00:17:10.533 "message": "Input/output error" 00:17:10.533 } 00:17:10.533 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:10.533 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.533 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.533 16:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.533 16:33:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:10.533 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:10.533 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:10.533 16:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.789 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.789 
16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.051 request: 00:17:11.051 { 00:17:11.051 "name": "nvme0", 00:17:11.051 "trtype": "tcp", 00:17:11.051 "traddr": "10.0.0.2", 00:17:11.051 "adrfam": "ipv4", 00:17:11.051 "trsvcid": "4420", 00:17:11.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:11.051 "prchk_reftag": false, 00:17:11.051 "prchk_guard": false, 00:17:11.051 "hdgst": false, 00:17:11.051 "ddgst": false, 00:17:11.051 "dhchap_key": "key3", 00:17:11.051 "method": "bdev_nvme_attach_controller", 00:17:11.051 "req_id": 1 00:17:11.051 } 00:17:11.051 Got JSON-RPC error response 00:17:11.051 response: 00:17:11.051 { 00:17:11.051 "code": -5, 00:17:11.051 "message": "Input/output error" 00:17:11.051 } 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.052 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:11.314 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.315 16:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.315 16:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.571 request: 00:17:11.571 { 00:17:11.571 "name": "nvme0", 00:17:11.571 "trtype": "tcp", 00:17:11.571 "traddr": "10.0.0.2", 00:17:11.571 "adrfam": "ipv4", 00:17:11.571 "trsvcid": "4420", 00:17:11.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:11.571 "prchk_reftag": false, 00:17:11.571 "prchk_guard": false, 00:17:11.571 "hdgst": false, 00:17:11.571 "ddgst": false, 00:17:11.571 "dhchap_key": "key0", 00:17:11.571 "dhchap_ctrlr_key": "key1", 00:17:11.571 "method": "bdev_nvme_attach_controller", 00:17:11.571 "req_id": 1 00:17:11.571 } 00:17:11.571 Got JSON-RPC error response 00:17:11.571 response: 00:17:11.571 { 
00:17:11.571 "code": -5, 00:17:11.571 "message": "Input/output error" 00:17:11.571 } 00:17:11.571 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:11.571 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.571 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.571 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.571 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.571 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.827 00:17:11.827 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:11.827 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:11.827 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.084 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.084 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.084 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1506537 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1506537 ']' 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1506537 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1506537 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1506537' 00:17:12.341 killing process with pid 1506537 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1506537 00:17:12.341 16:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1506537 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.907 rmmod nvme_tcp 00:17:12.907 rmmod nvme_fabrics 
00:17:12.907 rmmod nvme_keyring 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1529144 ']' 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1529144 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1529144 ']' 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1529144 00:17:12.907 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:12.908 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.908 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1529144 00:17:12.908 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.908 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.908 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1529144' 00:17:12.908 killing process with pid 1529144 00:17:12.908 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1529144 00:17:12.908 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1529144 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.166 16:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.702 16:33:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.702 16:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.V1s /tmp/spdk.key-sha256.R5F /tmp/spdk.key-sha384.Qwe /tmp/spdk.key-sha512.U43 /tmp/spdk.key-sha512.8uM /tmp/spdk.key-sha384.DDJ /tmp/spdk.key-sha256.z14 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:15.702 00:17:15.702 real 3m9.789s 00:17:15.702 user 7m22.184s 00:17:15.702 sys 0m25.021s 00:17:15.702 16:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:15.702 16:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.702 ************************************ 00:17:15.702 END TEST nvmf_auth_target 00:17:15.702 ************************************ 00:17:15.702 16:33:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:15.702 16:33:54 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:15.702 16:33:54 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.702 16:33:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:15.702 16:33:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.702 16:33:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.702 
************************************ 00:17:15.702 START TEST nvmf_bdevio_no_huge 00:17:15.702 ************************************ 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.702 * Looking for test storage... 00:17:15.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.702 16:33:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.702 16:33:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:17.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:17.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:17.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:17.608 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:17.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.609 16:33:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.609 
16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.609 16:33:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:17:17.609 00:17:17.609 --- 10.0.0.2 ping statistics --- 00:17:17.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.609 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:17:17.609 00:17:17.609 --- 10.0.0.1 ping statistics --- 00:17:17.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.609 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.609 16:33:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1531904 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1531904 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1531904 ']' 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.609 16:33:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.609 [2024-07-15 16:33:57.087600] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:17:17.609 [2024-07-15 16:33:57.087690] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:17.609 [2024-07-15 16:33:57.163538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.867 [2024-07-15 16:33:57.286064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.867 [2024-07-15 16:33:57.286114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.867 [2024-07-15 16:33:57.286128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.867 [2024-07-15 16:33:57.286140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.867 [2024-07-15 16:33:57.286150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:17.867 [2024-07-15 16:33:57.286253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:17.867 [2024-07-15 16:33:57.286589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:17.867 [2024-07-15 16:33:57.286642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:17.867 [2024-07-15 16:33:57.286645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.807 [2024-07-15 16:33:58.071371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.807 Malloc0 00:17:18.807 16:33:58 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.807 [2024-07-15 16:33:58.109440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:18.807 16:33:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:18.807 { 00:17:18.807 "params": { 00:17:18.807 "name": "Nvme$subsystem", 00:17:18.807 "trtype": "$TEST_TRANSPORT", 00:17:18.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.807 "adrfam": "ipv4", 00:17:18.807 "trsvcid": "$NVMF_PORT", 00:17:18.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.807 "hdgst": ${hdgst:-false}, 00:17:18.807 "ddgst": ${ddgst:-false} 00:17:18.807 }, 00:17:18.807 "method": "bdev_nvme_attach_controller" 00:17:18.807 } 00:17:18.807 EOF 00:17:18.807 )") 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:18.807 16:33:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:18.807 "params": { 00:17:18.807 "name": "Nvme1", 00:17:18.807 "trtype": "tcp", 00:17:18.807 "traddr": "10.0.0.2", 00:17:18.807 "adrfam": "ipv4", 00:17:18.807 "trsvcid": "4420", 00:17:18.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.807 "hdgst": false, 00:17:18.807 "ddgst": false 00:17:18.807 }, 00:17:18.807 "method": "bdev_nvme_attach_controller" 00:17:18.807 }' 00:17:18.807 [2024-07-15 16:33:58.156945] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:17:18.807 [2024-07-15 16:33:58.157022] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1532060 ] 00:17:18.807 [2024-07-15 16:33:58.220456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.807 [2024-07-15 16:33:58.336413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.807 [2024-07-15 16:33:58.336464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.807 [2024-07-15 16:33:58.336468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.067 I/O targets: 00:17:19.067 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:19.067 00:17:19.067 00:17:19.067 CUnit - A unit testing framework for C - Version 2.1-3 00:17:19.067 http://cunit.sourceforge.net/ 00:17:19.067 00:17:19.067 00:17:19.067 Suite: bdevio tests on: Nvme1n1 00:17:19.326 Test: blockdev write read block ...passed 00:17:19.326 Test: blockdev write zeroes read block ...passed 00:17:19.326 Test: blockdev write zeroes read no split ...passed 00:17:19.326 Test: blockdev write zeroes read split ...passed 00:17:19.326 Test: blockdev write zeroes read split partial ...passed 00:17:19.326 Test: blockdev reset ...[2024-07-15 16:33:58.827381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.326 [2024-07-15 16:33:58.827498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a7fb0 (9): Bad file descriptor 00:17:19.326 [2024-07-15 16:33:58.884603] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:19.326 passed 00:17:19.326 Test: blockdev write read 8 blocks ...passed 00:17:19.326 Test: blockdev write read size > 128k ...passed 00:17:19.326 Test: blockdev write read invalid size ...passed 00:17:19.584 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:19.584 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:19.584 Test: blockdev write read max offset ...passed 00:17:19.584 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:19.584 Test: blockdev writev readv 8 blocks ...passed 00:17:19.584 Test: blockdev writev readv 30 x 1block ...passed 00:17:19.584 Test: blockdev writev readv block ...passed 00:17:19.584 Test: blockdev writev readv size > 128k ...passed 00:17:19.584 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:19.584 Test: blockdev comparev and writev ...[2024-07-15 16:33:59.060608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.060643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.060667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.060684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.061101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.061126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.061147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.061173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.061540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.061564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.061591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.061608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.062003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.062027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.062048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.584 [2024-07-15 16:33:59.062064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:19.584 passed 00:17:19.584 Test: blockdev nvme passthru rw ...passed 00:17:19.584 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:33:59.144229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.584 [2024-07-15 16:33:59.144256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.144447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.584 [2024-07-15 16:33:59.144470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.144666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.584 [2024-07-15 16:33:59.144689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:19.584 [2024-07-15 16:33:59.144889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.584 [2024-07-15 16:33:59.144913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:19.584 passed 00:17:19.584 Test: blockdev nvme admin passthru ...passed 00:17:19.842 Test: blockdev copy ...passed 00:17:19.842 00:17:19.842 Run Summary: Type Total Ran Passed Failed Inactive 00:17:19.842 suites 1 1 n/a 0 0 00:17:19.842 tests 23 23 23 0 0 00:17:19.842 asserts 152 152 152 0 n/a 00:17:19.842 00:17:19.842 Elapsed time = 1.168 seconds 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.103 rmmod nvme_tcp 00:17:20.103 rmmod nvme_fabrics 00:17:20.103 rmmod nvme_keyring 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1531904 ']' 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1531904 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1531904 ']' 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1531904 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1531904 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1531904' 00:17:20.103 killing process with pid 1531904 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1531904 00:17:20.103 16:33:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1531904 00:17:20.671 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.671 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.671 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.671 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.672 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.672 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.672 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.672 16:34:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.582 16:34:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.582 00:17:22.582 real 0m7.333s 00:17:22.582 user 0m14.066s 00:17:22.582 sys 0m2.583s 00:17:22.582 16:34:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.582 16:34:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:22.582 ************************************ 00:17:22.582 END TEST nvmf_bdevio_no_huge 00:17:22.582 ************************************ 00:17:22.582 16:34:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.582 16:34:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.582 16:34:02 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.582 16:34:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.582 16:34:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.840 ************************************ 00:17:22.840 START TEST nvmf_tls 00:17:22.840 ************************************ 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.840 * Looking for test storage... 00:17:22.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.840 16:34:02 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.840 16:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.744 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:24.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:24.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.745 16:34:04 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:24.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:24.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.745 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:25.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:17:25.004 00:17:25.004 --- 10.0.0.2 ping statistics --- 00:17:25.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.004 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:17:25.004 00:17:25.004 --- 10.0.0.1 ping statistics --- 00:17:25.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.004 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.004 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1534138 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1534138 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1534138 ']' 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.005 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.005 [2024-07-15 16:34:04.459526] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:25.005 [2024-07-15 16:34:04.459608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.005 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.005 [2024-07-15 16:34:04.528283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.265 [2024-07-15 16:34:04.637306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.265 [2024-07-15 16:34:04.637366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.265 [2024-07-15 16:34:04.637387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.265 [2024-07-15 16:34:04.637413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.265 [2024-07-15 16:34:04.637422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:25.265 [2024-07-15 16:34:04.637450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:25.265 16:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:25.525 true 00:17:25.525 16:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.525 16:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:25.785 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:25.785 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:25.785 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:26.043 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.043 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:26.300 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:26.300 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:26.300 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:26.571 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.571 16:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:26.831 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:26.831 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:26.831 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.831 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:27.091 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:27.091 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:27.091 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:27.351 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:27.351 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:27.611 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:27.612 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:27.612 16:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:27.871 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:27.871 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:27.871 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:17:27.871 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:27.871 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:28.129 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1iNDqcG9mb 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:28.130 
16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.dbxJf9H7jk 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1iNDqcG9mb 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dbxJf9H7jk 00:17:28.130 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:28.388 16:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:28.954 16:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1iNDqcG9mb 00:17:28.954 16:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1iNDqcG9mb 00:17:28.954 16:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:28.954 [2024-07-15 16:34:08.489605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.954 16:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:29.213 16:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:29.471 [2024-07-15 16:34:08.986978] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:29.471 [2024-07-15 16:34:08.987216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:17:29.472 16:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:29.730 malloc0 00:17:29.730 16:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:29.990 16:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1iNDqcG9mb 00:17:30.250 [2024-07-15 16:34:09.732283] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:30.250 16:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1iNDqcG9mb 00:17:30.250 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.289 Initializing NVMe Controllers 00:17:40.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:40.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:40.289 Initialization complete. Launching workers. 
00:17:40.289 ======================================================== 00:17:40.289 Latency(us) 00:17:40.289 Device Information : IOPS MiB/s Average min max 00:17:40.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7854.31 30.68 8150.96 1314.97 9543.45 00:17:40.289 ======================================================== 00:17:40.289 Total : 7854.31 30.68 8150.96 1314.97 9543.45 00:17:40.289 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1iNDqcG9mb 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1iNDqcG9mb' 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1536033 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1536033 /var/tmp/bdevperf.sock 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1536033 ']' 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.289 16:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.546 [2024-07-15 16:34:19.898238] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:40.546 [2024-07-15 16:34:19.898315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536033 ] 00:17:40.546 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.546 [2024-07-15 16:34:19.956914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.546 [2024-07-15 16:34:20.069019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.804 16:34:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.804 16:34:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:40.804 16:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1iNDqcG9mb 00:17:41.061 [2024-07-15 16:34:20.410284] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.062 [2024-07-15 16:34:20.410389] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.062 TLSTESTn1 00:17:41.062 16:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:41.062 Running I/O for 10 seconds... 00:17:53.265 00:17:53.265 Latency(us) 00:17:53.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.265 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.265 Verification LBA range: start 0x0 length 0x2000 00:17:53.265 TLSTESTn1 : 10.05 2442.50 9.54 0.00 0.00 52271.77 10243.03 81167.55 00:17:53.265 =================================================================================================================== 00:17:53.265 Total : 2442.50 9.54 0.00 0.00 52271.77 10243.03 81167.55 00:17:53.265 0 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1536033 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1536033 ']' 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1536033 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1536033 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1536033' 00:17:53.265 killing process with pid 1536033 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1536033 00:17:53.265 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.265 00:17:53.265 Latency(us) 
00:17:53.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.265 =================================================================================================================== 00:17:53.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.265 [2024-07-15 16:34:30.723590] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1536033 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dbxJf9H7jk 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dbxJf9H7jk 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dbxJf9H7jk 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dbxJf9H7jk' 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1537347 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1537347 /var/tmp/bdevperf.sock 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1537347 ']' 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.265 16:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.265 [2024-07-15 16:34:31.030478] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:17:53.266 [2024-07-15 16:34:31.030571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537347 ] 00:17:53.266 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.266 [2024-07-15 16:34:31.088614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.266 [2024-07-15 16:34:31.191075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dbxJf9H7jk 00:17:53.266 [2024-07-15 16:34:31.532568] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.266 [2024-07-15 16:34:31.532679] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:53.266 [2024-07-15 16:34:31.542186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:53.266 [2024-07-15 16:34:31.542660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2f90 (107): Transport endpoint is not connected 00:17:53.266 [2024-07-15 16:34:31.543649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2f90 (9): Bad file descriptor 00:17:53.266 [2024-07-15 
16:34:31.544648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:53.266 [2024-07-15 16:34:31.544673] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:53.266 [2024-07-15 16:34:31.544689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:53.266 request: 00:17:53.266 { 00:17:53.266 "name": "TLSTEST", 00:17:53.266 "trtype": "tcp", 00:17:53.266 "traddr": "10.0.0.2", 00:17:53.266 "adrfam": "ipv4", 00:17:53.266 "trsvcid": "4420", 00:17:53.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.266 "prchk_reftag": false, 00:17:53.266 "prchk_guard": false, 00:17:53.266 "hdgst": false, 00:17:53.266 "ddgst": false, 00:17:53.266 "psk": "/tmp/tmp.dbxJf9H7jk", 00:17:53.266 "method": "bdev_nvme_attach_controller", 00:17:53.266 "req_id": 1 00:17:53.266 } 00:17:53.266 Got JSON-RPC error response 00:17:53.266 response: 00:17:53.266 { 00:17:53.266 "code": -5, 00:17:53.266 "message": "Input/output error" 00:17:53.266 } 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1537347 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1537347 ']' 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1537347 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537347 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1537347' 00:17:53.266 killing process with pid 1537347 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1537347 00:17:53.266 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.266 00:17:53.266 Latency(us) 00:17:53.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.266 =================================================================================================================== 00:17:53.266 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.266 [2024-07-15 16:34:31.597118] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1537347 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1iNDqcG9mb 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1iNDqcG9mb 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1iNDqcG9mb 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1iNDqcG9mb' 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1537386 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1537386 /var/tmp/bdevperf.sock 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1537386 ']' 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.266 16:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.266 [2024-07-15 16:34:31.904495] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:53.266 [2024-07-15 16:34:31.904584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537386 ] 00:17:53.266 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.266 [2024-07-15 16:34:31.970666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.266 [2024-07-15 16:34:32.088844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1iNDqcG9mb 00:17:53.266 [2024-07-15 16:34:32.477708] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.266 [2024-07-15 16:34:32.477821] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:53.266 [2024-07-15 16:34:32.484480] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:53.266 [2024-07-15 16:34:32.484509] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:53.266 [2024-07-15 16:34:32.484560] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:53.266 [2024-07-15 16:34:32.484670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd9f90 (107): Transport endpoint is not connected 00:17:53.266 [2024-07-15 16:34:32.485660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd9f90 (9): Bad file descriptor 00:17:53.266 [2024-07-15 16:34:32.486663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:53.266 [2024-07-15 16:34:32.486682] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:53.266 [2024-07-15 16:34:32.486697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:53.266 request: 00:17:53.266 { 00:17:53.266 "name": "TLSTEST", 00:17:53.266 "trtype": "tcp", 00:17:53.266 "traddr": "10.0.0.2", 00:17:53.266 "adrfam": "ipv4", 00:17:53.266 "trsvcid": "4420", 00:17:53.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.266 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:53.266 "prchk_reftag": false, 00:17:53.266 "prchk_guard": false, 00:17:53.266 "hdgst": false, 00:17:53.266 "ddgst": false, 00:17:53.266 "psk": "/tmp/tmp.1iNDqcG9mb", 00:17:53.266 "method": "bdev_nvme_attach_controller", 00:17:53.266 "req_id": 1 00:17:53.266 } 00:17:53.266 Got JSON-RPC error response 00:17:53.266 response: 00:17:53.266 { 00:17:53.266 "code": -5, 00:17:53.266 "message": "Input/output error" 00:17:53.266 } 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1537386 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1537386 ']' 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1537386 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537386 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1537386' 00:17:53.266 killing process with pid 1537386 00:17:53.266 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1537386 00:17:53.266 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.266 00:17:53.266 Latency(us) 00:17:53.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.266 
=================================================================================================================== 00:17:53.267 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.267 [2024-07-15 16:34:32.535283] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1537386 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1iNDqcG9mb 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1iNDqcG9mb 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1iNDqcG9mb 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1iNDqcG9mb' 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1537507 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1537507 /var/tmp/bdevperf.sock 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1537507 ']' 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.267 16:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.267 [2024-07-15 16:34:32.839339] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:17:53.267 [2024-07-15 16:34:32.839430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537507 ] 00:17:53.524 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.524 [2024-07-15 16:34:32.898224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.524 [2024-07-15 16:34:33.003262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.524 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.524 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.524 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1iNDqcG9mb 00:17:54.090 [2024-07-15 16:34:33.387568] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.090 [2024-07-15 16:34:33.387695] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:54.090 [2024-07-15 16:34:33.396807] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:54.090 [2024-07-15 16:34:33.396839] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:54.090 [2024-07-15 16:34:33.396901] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:54.090 [2024-07-15 16:34:33.397661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfccf90 (107): Transport endpoint is not connected 00:17:54.090 [2024-07-15 16:34:33.398651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfccf90 (9): Bad file descriptor 00:17:54.090 [2024-07-15 16:34:33.399650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:54.090 [2024-07-15 16:34:33.399670] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:54.090 [2024-07-15 16:34:33.399686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:54.090 request: 00:17:54.090 { 00:17:54.090 "name": "TLSTEST", 00:17:54.090 "trtype": "tcp", 00:17:54.090 "traddr": "10.0.0.2", 00:17:54.090 "adrfam": "ipv4", 00:17:54.090 "trsvcid": "4420", 00:17:54.090 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:54.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.090 "prchk_reftag": false, 00:17:54.090 "prchk_guard": false, 00:17:54.090 "hdgst": false, 00:17:54.090 "ddgst": false, 00:17:54.090 "psk": "/tmp/tmp.1iNDqcG9mb", 00:17:54.090 "method": "bdev_nvme_attach_controller", 00:17:54.090 "req_id": 1 00:17:54.090 } 00:17:54.090 Got JSON-RPC error response 00:17:54.090 response: 00:17:54.090 { 00:17:54.090 "code": -5, 00:17:54.090 "message": "Input/output error" 00:17:54.090 } 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1537507 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1537507 ']' 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1537507 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537507 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1537507' 00:17:54.090 killing process with pid 1537507 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1537507 00:17:54.090 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.090 00:17:54.090 Latency(us) 00:17:54.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.090 =================================================================================================================== 00:17:54.090 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.090 [2024-07-15 16:34:33.442807] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1537507 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1537645 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1537645 /var/tmp/bdevperf.sock 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1537645 ']' 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.090 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.348 [2024-07-15 16:34:33.721434] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:54.348 [2024-07-15 16:34:33.721507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537645 ] 00:17:54.348 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.348 [2024-07-15 16:34:33.780257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.348 [2024-07-15 16:34:33.885131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.605 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.605 16:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.605 16:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:54.864 [2024-07-15 16:34:34.239102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:54.864 [2024-07-15 16:34:34.241406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2535770 (9): Bad file descriptor 00:17:54.864 [2024-07-15 16:34:34.242404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:54.864 [2024-07-15 16:34:34.242424] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:54.864 [2024-07-15 16:34:34.242439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:54.864 request: 00:17:54.864 { 00:17:54.864 "name": "TLSTEST", 00:17:54.864 "trtype": "tcp", 00:17:54.864 "traddr": "10.0.0.2", 00:17:54.864 "adrfam": "ipv4", 00:17:54.864 "trsvcid": "4420", 00:17:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.864 "prchk_reftag": false, 00:17:54.864 "prchk_guard": false, 00:17:54.864 "hdgst": false, 00:17:54.864 "ddgst": false, 00:17:54.864 "method": "bdev_nvme_attach_controller", 00:17:54.864 "req_id": 1 00:17:54.864 } 00:17:54.864 Got JSON-RPC error response 00:17:54.864 response: 00:17:54.864 { 00:17:54.864 "code": -5, 00:17:54.864 "message": "Input/output error" 00:17:54.864 } 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1537645 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1537645 ']' 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1537645 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537645 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1537645' 00:17:54.864 killing process with pid 1537645 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 1537645 00:17:54.864 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.864 00:17:54.864 Latency(us) 00:17:54.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.864 =================================================================================================================== 00:17:54.864 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.864 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1537645 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1534138 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1534138 ']' 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1534138 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1534138 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1534138' 00:17:55.124 killing process with pid 1534138 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
1534138 00:17:55.124 [2024-07-15 16:34:34.578462] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:55.124 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1534138 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.buU8qafjEd 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.buU8qafjEd 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=1537795 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1537795 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1537795 ']' 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.382 16:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.640 [2024-07-15 16:34:34.989592] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:55.640 [2024-07-15 16:34:34.989688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.640 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.640 [2024-07-15 16:34:35.059963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.640 [2024-07-15 16:34:35.174160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.640 [2024-07-15 16:34:35.174228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:55.640 [2024-07-15 16:34:35.174255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.640 [2024-07-15 16:34:35.174268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.640 [2024-07-15 16:34:35.174288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.640 [2024-07-15 16:34:35.174318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.buU8qafjEd 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.buU8qafjEd 00:17:56.574 16:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:56.574 [2024-07-15 16:34:36.157926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.833 16:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:56.833 16:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:17:57.089 [2024-07-15 16:34:36.647208] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.089 [2024-07-15 16:34:36.647470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.089 16:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:57.347 malloc0 00:17:57.347 16:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.682 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.buU8qafjEd 00:17:57.940 [2024-07-15 16:34:37.376342] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.buU8qafjEd 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.buU8qafjEd' 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1538086 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.940 16:34:37 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1538086 /var/tmp/bdevperf.sock 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1538086 ']' 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.940 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.940 [2024-07-15 16:34:37.438917] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:17:57.940 [2024-07-15 16:34:37.439001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538086 ] 00:17:57.940 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.940 [2024-07-15 16:34:37.497947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.196 [2024-07-15 16:34:37.605803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.196 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.196 16:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:58.197 16:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.buU8qafjEd 00:17:58.453 [2024-07-15 16:34:37.938235] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.453 [2024-07-15 16:34:37.938357] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.453 TLSTESTn1 00:17:58.453 16:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.710 Running I/O for 10 seconds... 
00:18:08.721
00:18:08.721 Latency(us)
00:18:08.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.721 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:08.721 Verification LBA range: start 0x0 length 0x2000
00:18:08.721 TLSTESTn1 : 10.05 2489.23 9.72 0.00 0.00 51282.26 5801.15 86216.25
00:18:08.721 ===================================================================================================================
00:18:08.721 Total : 2489.23 9.72 0.00 0.00 51282.26 5801.15 86216.25
00:18:08.721 0
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1538086
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1538086 ']'
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1538086
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538086
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538086'
00:18:08.721 killing process with pid 1538086
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1538086
00:18:08.721 Received shutdown signal, test time was about 10.000000 seconds
00:18:08.721
00:18:08.721 Latency(us)
00:18:08.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.721 ===================================================================================================================
00:18:08.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:08.721 [2024-07-15 16:34:48.248124] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:18:08.721 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1538086
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.buU8qafjEd
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.buU8qafjEd
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.buU8qafjEd
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.buU8qafjEd
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.buU8qafjEd'
00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- #
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1539402 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1539402 /var/tmp/bdevperf.sock 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1539402 ']' 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.981 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.981 [2024-07-15 16:34:48.568783] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:18:08.981 [2024-07-15 16:34:48.568891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539402 ]
00:18:09.239 EAL: No free 2048 kB hugepages reported on node 1
00:18:09.239 [2024-07-15 16:34:48.628147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:09.239 [2024-07-15 16:34:48.739725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:09.498 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:09.498 16:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:09.498 16:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.buU8qafjEd
00:18:09.498 [2024-07-15 16:34:49.068388] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:09.498 [2024-07-15 16:34:49.068468] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file
00:18:09.498 [2024-07-15 16:34:49.068481] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.buU8qafjEd
00:18:09.498 request:
00:18:09.498 {
00:18:09.498 "name": "TLSTEST",
00:18:09.498 "trtype": "tcp",
00:18:09.498 "traddr": "10.0.0.2",
00:18:09.498 "adrfam": "ipv4",
00:18:09.498 "trsvcid": "4420",
00:18:09.498 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:09.498 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:09.498 "prchk_reftag": false,
00:18:09.498 "prchk_guard": false,
00:18:09.498 "hdgst": false,
00:18:09.498 "ddgst": false,
00:18:09.498 "psk": "/tmp/tmp.buU8qafjEd",
00:18:09.498 "method": "bdev_nvme_attach_controller",
00:18:09.498 "req_id": 1
00:18:09.498 }
00:18:09.498 Got JSON-RPC error response
00:18:09.498 response:
00:18:09.498 {
00:18:09.498 "code": -1,
00:18:09.498 "message": "Operation not permitted"
00:18:09.498 }
00:18:09.498 16:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1539402
00:18:09.498 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1539402 ']'
00:18:09.498 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1539402
00:18:09.498 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:09.498 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:09.498 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1539402
00:18:09.758 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:09.758 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:09.758 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1539402'
00:18:09.758 killing process with pid 1539402
00:18:09.758 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1539402
00:18:09.758 Received shutdown signal, test time was about 10.000000 seconds
00:18:09.758
00:18:09.758 Latency(us)
00:18:09.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.758 ===================================================================================================================
00:18:09.758 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:09.758 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1539402
00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:10.018
16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1537795 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1537795 ']' 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1537795 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537795 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1537795' 00:18:10.018 killing process with pid 1537795 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1537795 00:18:10.018 [2024-07-15 16:34:49.406100] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:10.018 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1537795 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1539544 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1539544 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1539544 ']' 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.278 16:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.278 [2024-07-15 16:34:49.767825] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:10.278 [2024-07-15 16:34:49.767941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.278 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.278 [2024-07-15 16:34:49.837846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.536 [2024-07-15 16:34:49.963936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.536 [2024-07-15 16:34:49.963989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:10.536 [2024-07-15 16:34:49.964004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.536 [2024-07-15 16:34:49.964016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.536 [2024-07-15 16:34:49.964026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.536 [2024-07-15 16:34:49.964052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.buU8qafjEd 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.buU8qafjEd 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.buU8qafjEd 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.buU8qafjEd 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:11.475 [2024-07-15 16:34:50.961391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.475 16:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:11.733 16:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:11.990 [2024-07-15 16:34:51.462723] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.990 [2024-07-15 16:34:51.462995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.990 16:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.248 malloc0 00:18:12.248 16:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.506 16:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.buU8qafjEd 00:18:12.763 [2024-07-15 16:34:52.315766] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:12.763 [2024-07-15 16:34:52.315811] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:12.763 [2024-07-15 16:34:52.315857] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:12.763 
request:
00:18:12.763 {
00:18:12.763 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:12.763 "host": "nqn.2016-06.io.spdk:host1",
00:18:12.763 "psk": "/tmp/tmp.buU8qafjEd",
00:18:12.763 "method": "nvmf_subsystem_add_host",
00:18:12.763 "req_id": 1
00:18:12.763 }
00:18:12.763 Got JSON-RPC error response
00:18:12.763 response:
00:18:12.763 {
00:18:12.763 "code": -32603,
00:18:12.763 "message": "Internal error"
00:18:12.763 }
00:18:12.763 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:12.763 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:12.763 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:12.763 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:12.763 16:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1539544
00:18:12.763 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1539544 ']'
00:18:12.764 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1539544
00:18:12.764 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:12.764 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:12.764 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1539544
00:18:13.021 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:13.021 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:13.021 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1539544'
00:18:13.021 killing process with pid 1539544
00:18:13.021 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1539544
00:18:13.021 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1539544
00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.buU8qafjEd
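Both failures in this negative test trace back to the key file's mode: after `chmod 0666`, the initiator side fails in `bdev_nvme_load_psk` and the target side in `tcp_load_psk`, each with "Incorrect permissions for PSK file", and the test restores `0600` before continuing. The sketch below is an illustrative reimplementation of that kind of check, not SPDK's code (SPDK does the equivalent in C); the "no group/other bits" rule is an assumption that matches the 0600-pass / 0666-fail behavior seen here, and the key material is a placeholder.

```python
# Illustrative mode check mirroring why the 0666 PSK file above was rejected:
# the loader refuses a key file whose mode grants group/other access.
import os
import stat
import tempfile

def psk_mode_ok(path: str) -> bool:
    # Reject the file if any group/other permission bit is set (assumed rule).
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

fd, psk_path = tempfile.mkstemp()
os.write(fd, b"NVMeTLSkey-1:02:placeholder")  # stand-in key material
os.close(fd)

os.chmod(psk_path, 0o666)
print(psk_mode_ok(psk_path))   # False: world-accessible, load is refused

os.chmod(psk_path, 0o600)
print(psk_mode_ok(psk_path))   # True: owner-only, load can proceed

os.remove(psk_path)
```

This also explains the two different error codes above: the initiator RPC surfaces `-1` ("Operation not permitted") from `bdev_nvme_attach_controller`, while the target's `nvmf_subsystem_add_host` wraps the same root cause as `-32603` ("Internal error").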
00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1539968 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1539968 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1539968 ']' 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.281 16:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.281 [2024-07-15 16:34:52.723852] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:18:13.281 [2024-07-15 16:34:52.723945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.281 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.281 [2024-07-15 16:34:52.799288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.540 [2024-07-15 16:34:52.915719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.540 [2024-07-15 16:34:52.915783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.540 [2024-07-15 16:34:52.915796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.540 [2024-07-15 16:34:52.915807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.540 [2024-07-15 16:34:52.915816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:13.540 [2024-07-15 16:34:52.915843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.buU8qafjEd 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.buU8qafjEd 00:18:13.540 16:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:13.801 [2024-07-15 16:34:53.278846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.801 16:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:14.179 16:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:14.179 [2024-07-15 16:34:53.764113] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.179 [2024-07-15 16:34:53.764410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.439 16:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:18:14.439 malloc0 00:18:14.697 16:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:14.697 16:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.buU8qafjEd 00:18:14.955 [2024-07-15 16:34:54.506301] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1540251 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1540251 /var/tmp/bdevperf.sock 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1540251 ']' 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.955 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.212 [2024-07-15 16:34:54.567337] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:15.212 [2024-07-15 16:34:54.567410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540251 ] 00:18:15.212 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.212 [2024-07-15 16:34:54.625031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.212 [2024-07-15 16:34:54.729809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.469 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.469 16:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.469 16:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.buU8qafjEd 00:18:15.469 [2024-07-15 16:34:55.053504] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.469 [2024-07-15 16:34:55.053620] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.727 TLSTESTn1 00:18:15.727 16:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:15.985 16:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:15.985 "subsystems": [ 00:18:15.985 { 00:18:15.985 
"subsystem": "keyring", 00:18:15.985 "config": [] 00:18:15.985 }, 00:18:15.985 { 00:18:15.985 "subsystem": "iobuf", 00:18:15.985 "config": [ 00:18:15.985 { 00:18:15.985 "method": "iobuf_set_options", 00:18:15.985 "params": { 00:18:15.985 "small_pool_count": 8192, 00:18:15.985 "large_pool_count": 1024, 00:18:15.985 "small_bufsize": 8192, 00:18:15.985 "large_bufsize": 135168 00:18:15.985 } 00:18:15.985 } 00:18:15.985 ] 00:18:15.985 }, 00:18:15.985 { 00:18:15.985 "subsystem": "sock", 00:18:15.985 "config": [ 00:18:15.985 { 00:18:15.985 "method": "sock_set_default_impl", 00:18:15.985 "params": { 00:18:15.985 "impl_name": "posix" 00:18:15.985 } 00:18:15.985 }, 00:18:15.985 { 00:18:15.985 "method": "sock_impl_set_options", 00:18:15.985 "params": { 00:18:15.985 "impl_name": "ssl", 00:18:15.985 "recv_buf_size": 4096, 00:18:15.985 "send_buf_size": 4096, 00:18:15.985 "enable_recv_pipe": true, 00:18:15.985 "enable_quickack": false, 00:18:15.985 "enable_placement_id": 0, 00:18:15.985 "enable_zerocopy_send_server": true, 00:18:15.985 "enable_zerocopy_send_client": false, 00:18:15.985 "zerocopy_threshold": 0, 00:18:15.985 "tls_version": 0, 00:18:15.985 "enable_ktls": false 00:18:15.985 } 00:18:15.985 }, 00:18:15.985 { 00:18:15.985 "method": "sock_impl_set_options", 00:18:15.985 "params": { 00:18:15.985 "impl_name": "posix", 00:18:15.985 "recv_buf_size": 2097152, 00:18:15.985 "send_buf_size": 2097152, 00:18:15.985 "enable_recv_pipe": true, 00:18:15.985 "enable_quickack": false, 00:18:15.985 "enable_placement_id": 0, 00:18:15.985 "enable_zerocopy_send_server": true, 00:18:15.985 "enable_zerocopy_send_client": false, 00:18:15.985 "zerocopy_threshold": 0, 00:18:15.985 "tls_version": 0, 00:18:15.985 "enable_ktls": false 00:18:15.985 } 00:18:15.985 } 00:18:15.985 ] 00:18:15.985 }, 00:18:15.985 { 00:18:15.985 "subsystem": "vmd", 00:18:15.985 "config": [] 00:18:15.985 }, 00:18:15.985 { 00:18:15.985 "subsystem": "accel", 00:18:15.985 "config": [ 00:18:15.985 { 00:18:15.985 "method": 
"accel_set_options", 00:18:15.985 "params": { 00:18:15.985 "small_cache_size": 128, 00:18:15.985 "large_cache_size": 16, 00:18:15.985 "task_count": 2048, 00:18:15.985 "sequence_count": 2048, 00:18:15.985 "buf_count": 2048 00:18:15.985 } 00:18:15.985 } 00:18:15.985 ] 00:18:15.985 }, 00:18:15.985 { 00:18:15.985 "subsystem": "bdev", 00:18:15.985 "config": [ 00:18:15.986 { 00:18:15.986 "method": "bdev_set_options", 00:18:15.986 "params": { 00:18:15.986 "bdev_io_pool_size": 65535, 00:18:15.986 "bdev_io_cache_size": 256, 00:18:15.986 "bdev_auto_examine": true, 00:18:15.986 "iobuf_small_cache_size": 128, 00:18:15.986 "iobuf_large_cache_size": 16 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "bdev_raid_set_options", 00:18:15.986 "params": { 00:18:15.986 "process_window_size_kb": 1024 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "bdev_iscsi_set_options", 00:18:15.986 "params": { 00:18:15.986 "timeout_sec": 30 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "bdev_nvme_set_options", 00:18:15.986 "params": { 00:18:15.986 "action_on_timeout": "none", 00:18:15.986 "timeout_us": 0, 00:18:15.986 "timeout_admin_us": 0, 00:18:15.986 "keep_alive_timeout_ms": 10000, 00:18:15.986 "arbitration_burst": 0, 00:18:15.986 "low_priority_weight": 0, 00:18:15.986 "medium_priority_weight": 0, 00:18:15.986 "high_priority_weight": 0, 00:18:15.986 "nvme_adminq_poll_period_us": 10000, 00:18:15.986 "nvme_ioq_poll_period_us": 0, 00:18:15.986 "io_queue_requests": 0, 00:18:15.986 "delay_cmd_submit": true, 00:18:15.986 "transport_retry_count": 4, 00:18:15.986 "bdev_retry_count": 3, 00:18:15.986 "transport_ack_timeout": 0, 00:18:15.986 "ctrlr_loss_timeout_sec": 0, 00:18:15.986 "reconnect_delay_sec": 0, 00:18:15.986 "fast_io_fail_timeout_sec": 0, 00:18:15.986 "disable_auto_failback": false, 00:18:15.986 "generate_uuids": false, 00:18:15.986 "transport_tos": 0, 00:18:15.986 "nvme_error_stat": false, 00:18:15.986 "rdma_srq_size": 0, 
00:18:15.986 "io_path_stat": false, 00:18:15.986 "allow_accel_sequence": false, 00:18:15.986 "rdma_max_cq_size": 0, 00:18:15.986 "rdma_cm_event_timeout_ms": 0, 00:18:15.986 "dhchap_digests": [ 00:18:15.986 "sha256", 00:18:15.986 "sha384", 00:18:15.986 "sha512" 00:18:15.986 ], 00:18:15.986 "dhchap_dhgroups": [ 00:18:15.986 "null", 00:18:15.986 "ffdhe2048", 00:18:15.986 "ffdhe3072", 00:18:15.986 "ffdhe4096", 00:18:15.986 "ffdhe6144", 00:18:15.986 "ffdhe8192" 00:18:15.986 ] 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "bdev_nvme_set_hotplug", 00:18:15.986 "params": { 00:18:15.986 "period_us": 100000, 00:18:15.986 "enable": false 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "bdev_malloc_create", 00:18:15.986 "params": { 00:18:15.986 "name": "malloc0", 00:18:15.986 "num_blocks": 8192, 00:18:15.986 "block_size": 4096, 00:18:15.986 "physical_block_size": 4096, 00:18:15.986 "uuid": "e18077e9-2161-4c83-82c8-924097b95f8b", 00:18:15.986 "optimal_io_boundary": 0 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "bdev_wait_for_examine" 00:18:15.986 } 00:18:15.986 ] 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "subsystem": "nbd", 00:18:15.986 "config": [] 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "subsystem": "scheduler", 00:18:15.986 "config": [ 00:18:15.986 { 00:18:15.986 "method": "framework_set_scheduler", 00:18:15.986 "params": { 00:18:15.986 "name": "static" 00:18:15.986 } 00:18:15.986 } 00:18:15.986 ] 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "subsystem": "nvmf", 00:18:15.986 "config": [ 00:18:15.986 { 00:18:15.986 "method": "nvmf_set_config", 00:18:15.986 "params": { 00:18:15.986 "discovery_filter": "match_any", 00:18:15.986 "admin_cmd_passthru": { 00:18:15.986 "identify_ctrlr": false 00:18:15.986 } 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "nvmf_set_max_subsystems", 00:18:15.986 "params": { 00:18:15.986 "max_subsystems": 1024 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 
00:18:15.986 "method": "nvmf_set_crdt", 00:18:15.986 "params": { 00:18:15.986 "crdt1": 0, 00:18:15.986 "crdt2": 0, 00:18:15.986 "crdt3": 0 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "nvmf_create_transport", 00:18:15.986 "params": { 00:18:15.986 "trtype": "TCP", 00:18:15.986 "max_queue_depth": 128, 00:18:15.986 "max_io_qpairs_per_ctrlr": 127, 00:18:15.986 "in_capsule_data_size": 4096, 00:18:15.986 "max_io_size": 131072, 00:18:15.986 "io_unit_size": 131072, 00:18:15.986 "max_aq_depth": 128, 00:18:15.986 "num_shared_buffers": 511, 00:18:15.986 "buf_cache_size": 4294967295, 00:18:15.986 "dif_insert_or_strip": false, 00:18:15.986 "zcopy": false, 00:18:15.986 "c2h_success": false, 00:18:15.986 "sock_priority": 0, 00:18:15.986 "abort_timeout_sec": 1, 00:18:15.986 "ack_timeout": 0, 00:18:15.986 "data_wr_pool_size": 0 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "nvmf_create_subsystem", 00:18:15.986 "params": { 00:18:15.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.986 "allow_any_host": false, 00:18:15.986 "serial_number": "SPDK00000000000001", 00:18:15.986 "model_number": "SPDK bdev Controller", 00:18:15.986 "max_namespaces": 10, 00:18:15.986 "min_cntlid": 1, 00:18:15.986 "max_cntlid": 65519, 00:18:15.986 "ana_reporting": false 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "nvmf_subsystem_add_host", 00:18:15.986 "params": { 00:18:15.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.986 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.986 "psk": "/tmp/tmp.buU8qafjEd" 00:18:15.986 } 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "nvmf_subsystem_add_ns", 00:18:15.986 "params": { 00:18:15.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.986 "namespace": { 00:18:15.986 "nsid": 1, 00:18:15.986 "bdev_name": "malloc0", 00:18:15.986 "nguid": "E18077E921614C8382C8924097B95F8B", 00:18:15.986 "uuid": "e18077e9-2161-4c83-82c8-924097b95f8b", 00:18:15.986 "no_auto_visible": false 00:18:15.986 } 00:18:15.986 
} 00:18:15.986 }, 00:18:15.986 { 00:18:15.986 "method": "nvmf_subsystem_add_listener", 00:18:15.986 "params": { 00:18:15.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.986 "listen_address": { 00:18:15.986 "trtype": "TCP", 00:18:15.986 "adrfam": "IPv4", 00:18:15.986 "traddr": "10.0.0.2", 00:18:15.986 "trsvcid": "4420" 00:18:15.986 }, 00:18:15.986 "secure_channel": true 00:18:15.986 } 00:18:15.986 } 00:18:15.986 ] 00:18:15.986 } 00:18:15.986 ] 00:18:15.986 }' 00:18:15.986 16:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:16.246 16:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:16.246 "subsystems": [ 00:18:16.246 { 00:18:16.246 "subsystem": "keyring", 00:18:16.246 "config": [] 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "subsystem": "iobuf", 00:18:16.246 "config": [ 00:18:16.246 { 00:18:16.246 "method": "iobuf_set_options", 00:18:16.246 "params": { 00:18:16.246 "small_pool_count": 8192, 00:18:16.246 "large_pool_count": 1024, 00:18:16.246 "small_bufsize": 8192, 00:18:16.246 "large_bufsize": 135168 00:18:16.246 } 00:18:16.246 } 00:18:16.246 ] 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "subsystem": "sock", 00:18:16.246 "config": [ 00:18:16.246 { 00:18:16.246 "method": "sock_set_default_impl", 00:18:16.246 "params": { 00:18:16.246 "impl_name": "posix" 00:18:16.246 } 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "method": "sock_impl_set_options", 00:18:16.246 "params": { 00:18:16.246 "impl_name": "ssl", 00:18:16.246 "recv_buf_size": 4096, 00:18:16.246 "send_buf_size": 4096, 00:18:16.246 "enable_recv_pipe": true, 00:18:16.246 "enable_quickack": false, 00:18:16.246 "enable_placement_id": 0, 00:18:16.246 "enable_zerocopy_send_server": true, 00:18:16.246 "enable_zerocopy_send_client": false, 00:18:16.246 "zerocopy_threshold": 0, 00:18:16.246 "tls_version": 0, 00:18:16.246 "enable_ktls": false 00:18:16.246 } 00:18:16.246 }, 00:18:16.246 { 
00:18:16.246 "method": "sock_impl_set_options", 00:18:16.246 "params": { 00:18:16.246 "impl_name": "posix", 00:18:16.246 "recv_buf_size": 2097152, 00:18:16.246 "send_buf_size": 2097152, 00:18:16.246 "enable_recv_pipe": true, 00:18:16.246 "enable_quickack": false, 00:18:16.246 "enable_placement_id": 0, 00:18:16.246 "enable_zerocopy_send_server": true, 00:18:16.246 "enable_zerocopy_send_client": false, 00:18:16.246 "zerocopy_threshold": 0, 00:18:16.246 "tls_version": 0, 00:18:16.246 "enable_ktls": false 00:18:16.246 } 00:18:16.246 } 00:18:16.246 ] 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "subsystem": "vmd", 00:18:16.246 "config": [] 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "subsystem": "accel", 00:18:16.246 "config": [ 00:18:16.246 { 00:18:16.246 "method": "accel_set_options", 00:18:16.246 "params": { 00:18:16.246 "small_cache_size": 128, 00:18:16.246 "large_cache_size": 16, 00:18:16.246 "task_count": 2048, 00:18:16.246 "sequence_count": 2048, 00:18:16.246 "buf_count": 2048 00:18:16.246 } 00:18:16.246 } 00:18:16.246 ] 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "subsystem": "bdev", 00:18:16.246 "config": [ 00:18:16.246 { 00:18:16.246 "method": "bdev_set_options", 00:18:16.246 "params": { 00:18:16.246 "bdev_io_pool_size": 65535, 00:18:16.246 "bdev_io_cache_size": 256, 00:18:16.246 "bdev_auto_examine": true, 00:18:16.246 "iobuf_small_cache_size": 128, 00:18:16.246 "iobuf_large_cache_size": 16 00:18:16.246 } 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "method": "bdev_raid_set_options", 00:18:16.246 "params": { 00:18:16.246 "process_window_size_kb": 1024 00:18:16.246 } 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "method": "bdev_iscsi_set_options", 00:18:16.246 "params": { 00:18:16.246 "timeout_sec": 30 00:18:16.246 } 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "method": "bdev_nvme_set_options", 00:18:16.246 "params": { 00:18:16.246 "action_on_timeout": "none", 00:18:16.246 "timeout_us": 0, 00:18:16.246 "timeout_admin_us": 0, 00:18:16.246 "keep_alive_timeout_ms": 
10000, 00:18:16.246 "arbitration_burst": 0, 00:18:16.246 "low_priority_weight": 0, 00:18:16.246 "medium_priority_weight": 0, 00:18:16.246 "high_priority_weight": 0, 00:18:16.246 "nvme_adminq_poll_period_us": 10000, 00:18:16.246 "nvme_ioq_poll_period_us": 0, 00:18:16.246 "io_queue_requests": 512, 00:18:16.246 "delay_cmd_submit": true, 00:18:16.246 "transport_retry_count": 4, 00:18:16.246 "bdev_retry_count": 3, 00:18:16.246 "transport_ack_timeout": 0, 00:18:16.246 "ctrlr_loss_timeout_sec": 0, 00:18:16.246 "reconnect_delay_sec": 0, 00:18:16.246 "fast_io_fail_timeout_sec": 0, 00:18:16.246 "disable_auto_failback": false, 00:18:16.246 "generate_uuids": false, 00:18:16.246 "transport_tos": 0, 00:18:16.246 "nvme_error_stat": false, 00:18:16.246 "rdma_srq_size": 0, 00:18:16.246 "io_path_stat": false, 00:18:16.246 "allow_accel_sequence": false, 00:18:16.246 "rdma_max_cq_size": 0, 00:18:16.246 "rdma_cm_event_timeout_ms": 0, 00:18:16.246 "dhchap_digests": [ 00:18:16.246 "sha256", 00:18:16.246 "sha384", 00:18:16.246 "sha512" 00:18:16.246 ], 00:18:16.246 "dhchap_dhgroups": [ 00:18:16.246 "null", 00:18:16.246 "ffdhe2048", 00:18:16.246 "ffdhe3072", 00:18:16.246 "ffdhe4096", 00:18:16.246 "ffdhe6144", 00:18:16.246 "ffdhe8192" 00:18:16.246 ] 00:18:16.246 } 00:18:16.246 }, 00:18:16.246 { 00:18:16.246 "method": "bdev_nvme_attach_controller", 00:18:16.246 "params": { 00:18:16.246 "name": "TLSTEST", 00:18:16.246 "trtype": "TCP", 00:18:16.246 "adrfam": "IPv4", 00:18:16.246 "traddr": "10.0.0.2", 00:18:16.246 "trsvcid": "4420", 00:18:16.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.246 "prchk_reftag": false, 00:18:16.246 "prchk_guard": false, 00:18:16.246 "ctrlr_loss_timeout_sec": 0, 00:18:16.246 "reconnect_delay_sec": 0, 00:18:16.246 "fast_io_fail_timeout_sec": 0, 00:18:16.247 "psk": "/tmp/tmp.buU8qafjEd", 00:18:16.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.247 "hdgst": false, 00:18:16.247 "ddgst": false 00:18:16.247 } 00:18:16.247 }, 00:18:16.247 { 00:18:16.247 "method": 
"bdev_nvme_set_hotplug", 00:18:16.247 "params": { 00:18:16.247 "period_us": 100000, 00:18:16.247 "enable": false 00:18:16.247 } 00:18:16.247 }, 00:18:16.247 { 00:18:16.247 "method": "bdev_wait_for_examine" 00:18:16.247 } 00:18:16.247 ] 00:18:16.247 }, 00:18:16.247 { 00:18:16.247 "subsystem": "nbd", 00:18:16.247 "config": [] 00:18:16.247 } 00:18:16.247 ] 00:18:16.247 }' 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1540251 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1540251 ']' 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1540251 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1540251 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1540251' 00:18:16.247 killing process with pid 1540251 00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1540251 00:18:16.247 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.247 00:18:16.247 Latency(us) 00:18:16.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.247 =================================================================================================================== 00:18:16.247 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.247 [2024-07-15 16:34:55.795056] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:18:16.247 16:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1540251 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1539968 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1539968 ']' 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1539968 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1539968 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1539968' 00:18:16.505 killing process with pid 1539968 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1539968 00:18:16.505 [2024-07-15 16:34:56.090964] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:16.505 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1539968 00:18:17.075 16:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:17.075 16:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.075 16:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:17.075 "subsystems": [ 00:18:17.075 { 00:18:17.075 "subsystem": "keyring", 00:18:17.075 "config": [] 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "subsystem": "iobuf", 00:18:17.075 "config": [ 00:18:17.075 { 00:18:17.075 "method": "iobuf_set_options", 00:18:17.075 "params": { 00:18:17.075 "small_pool_count": 8192, 00:18:17.075 
"large_pool_count": 1024, 00:18:17.075 "small_bufsize": 8192, 00:18:17.075 "large_bufsize": 135168 00:18:17.075 } 00:18:17.075 } 00:18:17.075 ] 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "subsystem": "sock", 00:18:17.075 "config": [ 00:18:17.075 { 00:18:17.075 "method": "sock_set_default_impl", 00:18:17.075 "params": { 00:18:17.075 "impl_name": "posix" 00:18:17.075 } 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "method": "sock_impl_set_options", 00:18:17.075 "params": { 00:18:17.075 "impl_name": "ssl", 00:18:17.075 "recv_buf_size": 4096, 00:18:17.075 "send_buf_size": 4096, 00:18:17.075 "enable_recv_pipe": true, 00:18:17.075 "enable_quickack": false, 00:18:17.075 "enable_placement_id": 0, 00:18:17.075 "enable_zerocopy_send_server": true, 00:18:17.075 "enable_zerocopy_send_client": false, 00:18:17.075 "zerocopy_threshold": 0, 00:18:17.075 "tls_version": 0, 00:18:17.075 "enable_ktls": false 00:18:17.075 } 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "method": "sock_impl_set_options", 00:18:17.075 "params": { 00:18:17.075 "impl_name": "posix", 00:18:17.075 "recv_buf_size": 2097152, 00:18:17.075 "send_buf_size": 2097152, 00:18:17.075 "enable_recv_pipe": true, 00:18:17.075 "enable_quickack": false, 00:18:17.075 "enable_placement_id": 0, 00:18:17.075 "enable_zerocopy_send_server": true, 00:18:17.075 "enable_zerocopy_send_client": false, 00:18:17.075 "zerocopy_threshold": 0, 00:18:17.075 "tls_version": 0, 00:18:17.075 "enable_ktls": false 00:18:17.075 } 00:18:17.075 } 00:18:17.075 ] 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "subsystem": "vmd", 00:18:17.075 "config": [] 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "subsystem": "accel", 00:18:17.075 "config": [ 00:18:17.075 { 00:18:17.075 "method": "accel_set_options", 00:18:17.075 "params": { 00:18:17.075 "small_cache_size": 128, 00:18:17.075 "large_cache_size": 16, 00:18:17.075 "task_count": 2048, 00:18:17.075 "sequence_count": 2048, 00:18:17.075 "buf_count": 2048 00:18:17.075 } 00:18:17.075 } 00:18:17.075 ] 00:18:17.075 
}, 00:18:17.075 { 00:18:17.075 "subsystem": "bdev", 00:18:17.075 "config": [ 00:18:17.075 { 00:18:17.075 "method": "bdev_set_options", 00:18:17.075 "params": { 00:18:17.075 "bdev_io_pool_size": 65535, 00:18:17.075 "bdev_io_cache_size": 256, 00:18:17.075 "bdev_auto_examine": true, 00:18:17.075 "iobuf_small_cache_size": 128, 00:18:17.075 "iobuf_large_cache_size": 16 00:18:17.075 } 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "method": "bdev_raid_set_options", 00:18:17.075 "params": { 00:18:17.075 "process_window_size_kb": 1024 00:18:17.075 } 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "method": "bdev_iscsi_set_options", 00:18:17.075 "params": { 00:18:17.075 "timeout_sec": 30 00:18:17.075 } 00:18:17.075 }, 00:18:17.075 { 00:18:17.075 "method": "bdev_nvme_set_options", 00:18:17.075 "params": { 00:18:17.075 "action_on_timeout": "none", 00:18:17.075 "timeout_us": 0, 00:18:17.075 "timeout_admin_us": 0, 00:18:17.076 "keep_alive_timeout_ms": 10000, 00:18:17.076 "arbitration_burst": 0, 00:18:17.076 "low_priority_weight": 0, 00:18:17.076 "medium_priority_weight": 0, 00:18:17.076 "high_priority_weight": 0, 00:18:17.076 "nvme_adminq_poll_period_us": 10000, 00:18:17.076 "nvme_ioq_poll_period_us": 0, 00:18:17.076 "io_queue_requests": 0, 00:18:17.076 "delay_cmd_submit": true, 00:18:17.076 "transport_retry_count": 4, 00:18:17.076 "bdev_retry_count": 3, 00:18:17.076 "transport_ack_timeout": 0, 00:18:17.076 "ctrlr_loss_timeout_sec": 0, 00:18:17.076 "reconnect_delay_sec": 0, 00:18:17.076 "fast_io_fail_timeout_sec": 0, 00:18:17.076 "disable_auto_failback": false, 00:18:17.076 "generate_uuids": false, 00:18:17.076 "transport_tos": 0, 00:18:17.076 "nvme_error_stat": false, 00:18:17.076 "rdma_srq_size": 0, 00:18:17.076 "io_path_stat": false, 00:18:17.076 "allow_accel_sequence": false, 00:18:17.076 "rdma_max_cq_size": 0, 00:18:17.076 "rdma_cm_event_timeout_ms": 0, 00:18:17.076 "dhchap_digests": [ 00:18:17.076 "sha256", 00:18:17.076 "sha384", 00:18:17.076 "sha512" 00:18:17.076 ], 
00:18:17.076 "dhchap_dhgroups": [ 00:18:17.076 "null", 00:18:17.076 "ffdhe2048", 00:18:17.076 "ffdhe3072", 00:18:17.076 "ffdhe4096", 00:18:17.076 "ffdhe6144", 00:18:17.076 "ffdhe8192" 00:18:17.076 ] 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "bdev_nvme_set_hotplug", 00:18:17.076 "params": { 00:18:17.076 "period_us": 100000, 00:18:17.076 "enable": false 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "bdev_malloc_create", 00:18:17.076 "params": { 00:18:17.076 "name": "malloc0", 00:18:17.076 "num_blocks": 8192, 00:18:17.076 "block_size": 4096, 00:18:17.076 "physical_block_size": 4096, 00:18:17.076 "uuid": "e18077e9-2161-4c83-82c8-924097b95f8b", 00:18:17.076 "optimal_io_boundary": 0 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "bdev_wait_for_examine" 00:18:17.076 } 00:18:17.076 ] 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "subsystem": "nbd", 00:18:17.076 "config": [] 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "subsystem": "scheduler", 00:18:17.076 "config": [ 00:18:17.076 { 00:18:17.076 "method": "framework_set_scheduler", 00:18:17.076 "params": { 00:18:17.076 "name": "static" 00:18:17.076 } 00:18:17.076 } 00:18:17.076 ] 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "subsystem": "nvmf", 00:18:17.076 "config": [ 00:18:17.076 { 00:18:17.076 "method": "nvmf_set_config", 00:18:17.076 "params": { 00:18:17.076 "discovery_filter": "match_any", 00:18:17.076 "admin_cmd_passthru": { 00:18:17.076 "identify_ctrlr": false 00:18:17.076 } 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "nvmf_set_max_subsystems", 00:18:17.076 "params": { 00:18:17.076 "max_subsystems": 1024 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "nvmf_set_crdt", 00:18:17.076 "params": { 00:18:17.076 "crdt1": 0, 00:18:17.076 "crdt2": 0, 00:18:17.076 "crdt3": 0 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "nvmf_create_transport", 00:18:17.076 "params": { 00:18:17.076 "trtype": 
"TCP", 00:18:17.076 "max_queue_depth": 128, 00:18:17.076 "max_io_qpairs_per_ctrlr": 127, 00:18:17.076 "in_capsule_data_size": 4096, 00:18:17.076 "max_io_size": 131072, 00:18:17.076 "io_unit_size": 131072, 00:18:17.076 "max_aq_depth": 128, 00:18:17.076 "num_shared_buffers": 511, 00:18:17.076 "buf_cache_size": 4294967295, 00:18:17.076 "dif_insert_or_strip": false, 00:18:17.076 "zcopy": false, 00:18:17.076 "c2h_success": false, 00:18:17.076 "sock_priority": 0, 00:18:17.076 "abort_timeout_sec": 1, 00:18:17.076 "ack_timeout": 0, 00:18:17.076 "data_wr_pool_size": 0 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "nvmf_create_subsystem", 00:18:17.076 "params": { 00:18:17.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.076 "allow_any_host": false, 00:18:17.076 "serial_number": "SPDK00000000000001", 00:18:17.076 "model_number": "SPDK bdev Controller", 00:18:17.076 "max_namespaces": 10, 00:18:17.076 "min_cntlid": 1, 00:18:17.076 "max_cntlid": 65519, 00:18:17.076 "ana_reporting": false 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "nvmf_subsystem_add_host", 00:18:17.076 "params": { 00:18:17.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.076 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.076 "psk": "/tmp/tmp.buU8qafjEd" 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "nvmf_subsystem_add_ns", 00:18:17.076 "params": { 00:18:17.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.076 "namespace": { 00:18:17.076 "nsid": 1, 00:18:17.076 "bdev_name": "malloc0", 00:18:17.076 "nguid": "E18077E921614C8382C8924097B95F8B", 00:18:17.076 "uuid": "e18077e9-2161-4c83-82c8-924097b95f8b", 00:18:17.076 "no_auto_visible": false 00:18:17.076 } 00:18:17.076 } 00:18:17.076 }, 00:18:17.076 { 00:18:17.076 "method": "nvmf_subsystem_add_listener", 00:18:17.076 "params": { 00:18:17.076 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.076 "listen_address": { 00:18:17.076 "trtype": "TCP", 00:18:17.076 "adrfam": "IPv4", 00:18:17.076 "traddr": 
"10.0.0.2", 00:18:17.076 "trsvcid": "4420" 00:18:17.076 }, 00:18:17.076 "secure_channel": true 00:18:17.076 } 00:18:17.076 } 00:18:17.076 ] 00:18:17.076 } 00:18:17.076 ] 00:18:17.076 }' 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1540412 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1540412 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1540412 ']' 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.076 16:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.076 [2024-07-15 16:34:56.439985] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:18:17.076 [2024-07-15 16:34:56.440080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.076 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.076 [2024-07-15 16:34:56.503080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.076 [2024-07-15 16:34:56.610222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.076 [2024-07-15 16:34:56.610288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.076 [2024-07-15 16:34:56.610301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.076 [2024-07-15 16:34:56.610312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.076 [2024-07-15 16:34:56.610321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.076 [2024-07-15 16:34:56.610404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.336 [2024-07-15 16:34:56.840235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.336 [2024-07-15 16:34:56.856191] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:17.336 [2024-07-15 16:34:56.872253] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.336 [2024-07-15 16:34:56.887032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1540562 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1540562 /var/tmp/bdevperf.sock 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1540562 ']' 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.903 16:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:17.903 "subsystems": [ 00:18:17.903 { 00:18:17.903 "subsystem": "keyring", 00:18:17.903 "config": [] 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "subsystem": "iobuf", 00:18:17.903 "config": [ 00:18:17.903 { 00:18:17.903 "method": "iobuf_set_options", 00:18:17.903 "params": { 00:18:17.903 "small_pool_count": 8192, 00:18:17.903 "large_pool_count": 1024, 00:18:17.903 "small_bufsize": 8192, 00:18:17.903 "large_bufsize": 135168 00:18:17.903 } 00:18:17.903 } 00:18:17.903 ] 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "subsystem": "sock", 00:18:17.903 "config": [ 00:18:17.903 { 00:18:17.903 "method": "sock_set_default_impl", 00:18:17.903 "params": { 00:18:17.903 "impl_name": "posix" 00:18:17.903 } 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "method": "sock_impl_set_options", 00:18:17.903 "params": { 00:18:17.903 "impl_name": "ssl", 00:18:17.903 "recv_buf_size": 4096, 00:18:17.903 "send_buf_size": 4096, 00:18:17.903 "enable_recv_pipe": true, 00:18:17.903 "enable_quickack": false, 00:18:17.903 "enable_placement_id": 0, 00:18:17.903 "enable_zerocopy_send_server": true, 00:18:17.903 "enable_zerocopy_send_client": false, 00:18:17.903 "zerocopy_threshold": 0, 00:18:17.903 "tls_version": 0, 00:18:17.903 "enable_ktls": false 00:18:17.903 } 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "method": "sock_impl_set_options", 00:18:17.903 "params": { 00:18:17.903 "impl_name": "posix", 00:18:17.903 "recv_buf_size": 2097152, 00:18:17.903 "send_buf_size": 2097152, 00:18:17.903 "enable_recv_pipe": true, 00:18:17.903 "enable_quickack": false, 00:18:17.903 "enable_placement_id": 0, 00:18:17.903 "enable_zerocopy_send_server": true, 00:18:17.903 "enable_zerocopy_send_client": false, 
00:18:17.903 "zerocopy_threshold": 0, 00:18:17.903 "tls_version": 0, 00:18:17.903 "enable_ktls": false 00:18:17.903 } 00:18:17.903 } 00:18:17.903 ] 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "subsystem": "vmd", 00:18:17.903 "config": [] 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "subsystem": "accel", 00:18:17.903 "config": [ 00:18:17.903 { 00:18:17.903 "method": "accel_set_options", 00:18:17.903 "params": { 00:18:17.903 "small_cache_size": 128, 00:18:17.903 "large_cache_size": 16, 00:18:17.903 "task_count": 2048, 00:18:17.903 "sequence_count": 2048, 00:18:17.903 "buf_count": 2048 00:18:17.903 } 00:18:17.903 } 00:18:17.903 ] 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "subsystem": "bdev", 00:18:17.903 "config": [ 00:18:17.903 { 00:18:17.903 "method": "bdev_set_options", 00:18:17.903 "params": { 00:18:17.903 "bdev_io_pool_size": 65535, 00:18:17.903 "bdev_io_cache_size": 256, 00:18:17.903 "bdev_auto_examine": true, 00:18:17.903 "iobuf_small_cache_size": 128, 00:18:17.903 "iobuf_large_cache_size": 16 00:18:17.903 } 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "method": "bdev_raid_set_options", 00:18:17.903 "params": { 00:18:17.903 "process_window_size_kb": 1024 00:18:17.903 } 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "method": "bdev_iscsi_set_options", 00:18:17.903 "params": { 00:18:17.903 "timeout_sec": 30 00:18:17.903 } 00:18:17.903 }, 00:18:17.903 { 00:18:17.903 "method": "bdev_nvme_set_options", 00:18:17.903 "params": { 00:18:17.903 "action_on_timeout": "none", 00:18:17.903 "timeout_us": 0, 00:18:17.903 "timeout_admin_us": 0, 00:18:17.903 "keep_alive_timeout_ms": 10000, 00:18:17.903 "arbitration_burst": 0, 00:18:17.903 "low_priority_weight": 0, 00:18:17.903 "medium_priority_weight": 0, 00:18:17.903 "high_priority_weight": 0, 00:18:17.903 "nvme_adminq_poll_period_us": 10000, 00:18:17.903 "nvme_ioq_poll_period_us": 0, 00:18:17.903 "io_queue_requests": 512, 00:18:17.903 "delay_cmd_submit": true, 00:18:17.903 "transport_retry_count": 4, 00:18:17.903 
"bdev_retry_count": 3, 00:18:17.903 "transport_ack_timeout": 0, 00:18:17.903 "ctrlr_loss_timeout_sec": 0, 00:18:17.903 "reconnect_delay_sec": 0, 00:18:17.903 "fast_io_fail_timeout_sec": 0, 00:18:17.903 "disable_auto_failback": false, 00:18:17.903 "generate_uuids": false, 00:18:17.903 "transport_tos": 0, 00:18:17.903 "nvme_error_stat": false, 00:18:17.903 "rdma_srq_size": 0, 00:18:17.903 "io_path_stat": false, 00:18:17.903 "allow_accel_sequence": false, 00:18:17.903 "rdma_max_cq_size": 0, 00:18:17.903 "rdma_cm_event_timeout_ms": 0, 00:18:17.903 "dhchap_digests": [ 00:18:17.903 "sha256", 00:18:17.903 "sha384", 00:18:17.903 "sha512" 00:18:17.903 ], 00:18:17.903 "dhchap_dhgroups": [ 00:18:17.903 "null", 00:18:17.903 "ffdhe2048", 00:18:17.904 "ffdhe3072", 00:18:17.904 "ffdhe4096", 00:18:17.904 "ffdhe6144", 00:18:17.904 "ffdhe8192" 00:18:17.904 ] 00:18:17.904 } 00:18:17.904 }, 00:18:17.904 { 00:18:17.904 "method": "bdev_nvme_attach_controller", 00:18:17.904 "params": { 00:18:17.904 "name": "TLSTEST", 00:18:17.904 "trtype": "TCP", 00:18:17.904 "adrfam": "IPv4", 00:18:17.904 "traddr": "10.0.0.2", 00:18:17.904 "trsvcid": "4420", 00:18:17.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.904 "prchk_reftag": false, 00:18:17.904 "prchk_guard": false, 00:18:17.904 "ctrlr_loss_timeout_sec": 0, 00:18:17.904 "reconnect_delay_sec": 0, 00:18:17.904 "fast_io_fail_timeout_sec": 0, 00:18:17.904 "psk": "/tmp/tmp.buU8qafjEd", 00:18:17.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.904 "hdgst": false, 00:18:17.904 "ddgst": false 00:18:17.904 } 00:18:17.904 }, 00:18:17.904 { 00:18:17.904 "method": "bdev_nvme_set_hotplug", 00:18:17.904 "params": { 00:18:17.904 "period_us": 100000, 00:18:17.904 "enable": false 00:18:17.904 } 00:18:17.904 }, 00:18:17.904 { 00:18:17.904 "method": "bdev_wait_for_examine" 00:18:17.904 } 00:18:17.904 ] 00:18:17.904 }, 00:18:17.904 { 00:18:17.904 "subsystem": "nbd", 00:18:17.904 "config": [] 00:18:17.904 } 00:18:17.904 ] 00:18:17.904 }' 00:18:17.904 
16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.904 16:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.161 [2024-07-15 16:34:57.502701] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:18.161 [2024-07-15 16:34:57.502806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540562 ] 00:18:18.161 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.161 [2024-07-15 16:34:57.560678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.161 [2024-07-15 16:34:57.665512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.420 [2024-07-15 16:34:57.832287] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.420 [2024-07-15 16:34:57.832397] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.987 16:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.987 16:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.987 16:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:19.245 Running I/O for 10 seconds... 
00:18:29.261 00:18:29.261 Latency(us) 00:18:29.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.261 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.261 Verification LBA range: start 0x0 length 0x2000 00:18:29.261 TLSTESTn1 : 10.05 2500.16 9.77 0.00 0.00 51055.11 6602.15 78449.02 00:18:29.261 =================================================================================================================== 00:18:29.261 Total : 2500.16 9.77 0.00 0.00 51055.11 6602.15 78449.02 00:18:29.261 0 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1540562 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1540562 ']' 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1540562 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1540562 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1540562' 00:18:29.261 killing process with pid 1540562 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1540562 00:18:29.261 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.261 00:18:29.261 Latency(us) 00:18:29.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.261 
=================================================================================================================== 00:18:29.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.261 [2024-07-15 16:35:08.718189] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:29.261 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1540562 00:18:29.520 16:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1540412 00:18:29.520 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1540412 ']' 00:18:29.520 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1540412 00:18:29.520 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:29.520 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.520 16:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1540412 00:18:29.520 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:29.520 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:29.520 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1540412' 00:18:29.520 killing process with pid 1540412 00:18:29.520 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1540412 00:18:29.520 [2024-07-15 16:35:09.015076] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:29.520 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1540412 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1542025 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1542025 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1542025 ']' 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.778 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.778 [2024-07-15 16:35:09.369437] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:29.778 [2024-07-15 16:35:09.369521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.036 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.036 [2024-07-15 16:35:09.444681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.036 [2024-07-15 16:35:09.562086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:30.036 [2024-07-15 16:35:09.562161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.036 [2024-07-15 16:35:09.562177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.036 [2024-07-15 16:35:09.562191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.036 [2024-07-15 16:35:09.562203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.036 [2024-07-15 16:35:09.562234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.buU8qafjEd 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.buU8qafjEd 00:18:30.294 16:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.552 [2024-07-15 16:35:09.938025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.552 16:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.810 16:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.068 [2024-07-15 16:35:10.443398] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.068 [2024-07-15 16:35:10.443660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.068 16:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.325 malloc0 00:18:31.325 16:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.583 16:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.buU8qafjEd 00:18:31.583 [2024-07-15 16:35:11.173426] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1542192 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1542192 /var/tmp/bdevperf.sock 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1542192 ']' 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.842 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.842 [2024-07-15 16:35:11.236690] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:31.842 [2024-07-15 16:35:11.236772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542192 ] 00:18:31.842 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.842 [2024-07-15 16:35:11.297780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.842 [2024-07-15 16:35:11.417573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.099 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.099 16:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.099 16:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.buU8qafjEd 00:18:32.357 16:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:32.617 [2024-07-15 16:35:11.993574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.617 
nvme0n1 00:18:32.617 16:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.617 Running I/O for 1 seconds... 00:18:33.994 00:18:33.994 Latency(us) 00:18:33.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.994 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:33.994 Verification LBA range: start 0x0 length 0x2000 00:18:33.994 nvme0n1 : 1.05 2390.52 9.34 0.00 0.00 52433.12 6699.24 83109.36 00:18:33.994 =================================================================================================================== 00:18:33.994 Total : 2390.52 9.34 0.00 0.00 52433.12 6699.24 83109.36 00:18:33.994 0 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1542192 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1542192 ']' 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1542192 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542192 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542192' 00:18:33.994 killing process with pid 1542192 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1542192 00:18:33.994 Received shutdown signal, test time was about 1.000000 seconds 00:18:33.994 00:18:33.994 Latency(us) 00:18:33.994 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:18:33.994 =================================================================================================================== 00:18:33.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1542192 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1542025 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1542025 ']' 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1542025 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542025 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:33.994 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:33.995 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542025' 00:18:33.995 killing process with pid 1542025 00:18:33.995 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1542025 00:18:33.995 [2024-07-15 16:35:13.550284] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:33.995 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1542025 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=1542587 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1542587 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1542587 ']' 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.253 16:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.511 [2024-07-15 16:35:13.884606] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:34.512 [2024-07-15 16:35:13.884706] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.512 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.512 [2024-07-15 16:35:13.952422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.512 [2024-07-15 16:35:14.063736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.512 [2024-07-15 16:35:14.063802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:34.512 [2024-07-15 16:35:14.063825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.512 [2024-07-15 16:35:14.063839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.512 [2024-07-15 16:35:14.063859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.512 [2024-07-15 16:35:14.063909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.447 [2024-07-15 16:35:14.880783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.447 malloc0 00:18:35.447 [2024-07-15 16:35:14.912322] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.447 [2024-07-15 16:35:14.912580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1542738 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
1542738 /var/tmp/bdevperf.sock 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1542738 ']' 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.447 16:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.447 [2024-07-15 16:35:14.985765] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:18:35.447 [2024-07-15 16:35:14.985853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542738 ] 00:18:35.447 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.447 [2024-07-15 16:35:15.043770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.705 [2024-07-15 16:35:15.153915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.705 16:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.705 16:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:35.705 16:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.buU8qafjEd 00:18:35.962 16:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:36.221 [2024-07-15 16:35:15.783955] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.481 nvme0n1 00:18:36.481 16:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:36.481 Running I/O for 1 seconds... 
00:18:37.862 00:18:37.862 Latency(us) 00:18:37.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.862 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.862 Verification LBA range: start 0x0 length 0x2000 00:18:37.862 nvme0n1 : 1.05 2318.01 9.05 0.00 0.00 54083.34 10582.85 81555.91 00:18:37.862 =================================================================================================================== 00:18:37.862 Total : 2318.01 9.05 0.00 0.00 54083.34 10582.85 81555.91 00:18:37.862 0 00:18:37.862 16:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:37.862 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.862 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.862 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.862 16:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:37.862 "subsystems": [ 00:18:37.862 { 00:18:37.862 "subsystem": "keyring", 00:18:37.862 "config": [ 00:18:37.862 { 00:18:37.862 "method": "keyring_file_add_key", 00:18:37.862 "params": { 00:18:37.862 "name": "key0", 00:18:37.862 "path": "/tmp/tmp.buU8qafjEd" 00:18:37.862 } 00:18:37.862 } 00:18:37.863 ] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "iobuf", 00:18:37.863 "config": [ 00:18:37.863 { 00:18:37.863 "method": "iobuf_set_options", 00:18:37.863 "params": { 00:18:37.863 "small_pool_count": 8192, 00:18:37.863 "large_pool_count": 1024, 00:18:37.863 "small_bufsize": 8192, 00:18:37.863 "large_bufsize": 135168 00:18:37.863 } 00:18:37.863 } 00:18:37.863 ] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "sock", 00:18:37.863 "config": [ 00:18:37.863 { 00:18:37.863 "method": "sock_set_default_impl", 00:18:37.863 "params": { 00:18:37.863 "impl_name": "posix" 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "sock_impl_set_options", 00:18:37.863 
"params": { 00:18:37.863 "impl_name": "ssl", 00:18:37.863 "recv_buf_size": 4096, 00:18:37.863 "send_buf_size": 4096, 00:18:37.863 "enable_recv_pipe": true, 00:18:37.863 "enable_quickack": false, 00:18:37.863 "enable_placement_id": 0, 00:18:37.863 "enable_zerocopy_send_server": true, 00:18:37.863 "enable_zerocopy_send_client": false, 00:18:37.863 "zerocopy_threshold": 0, 00:18:37.863 "tls_version": 0, 00:18:37.863 "enable_ktls": false 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "sock_impl_set_options", 00:18:37.863 "params": { 00:18:37.863 "impl_name": "posix", 00:18:37.863 "recv_buf_size": 2097152, 00:18:37.863 "send_buf_size": 2097152, 00:18:37.863 "enable_recv_pipe": true, 00:18:37.863 "enable_quickack": false, 00:18:37.863 "enable_placement_id": 0, 00:18:37.863 "enable_zerocopy_send_server": true, 00:18:37.863 "enable_zerocopy_send_client": false, 00:18:37.863 "zerocopy_threshold": 0, 00:18:37.863 "tls_version": 0, 00:18:37.863 "enable_ktls": false 00:18:37.863 } 00:18:37.863 } 00:18:37.863 ] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "vmd", 00:18:37.863 "config": [] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "accel", 00:18:37.863 "config": [ 00:18:37.863 { 00:18:37.863 "method": "accel_set_options", 00:18:37.863 "params": { 00:18:37.863 "small_cache_size": 128, 00:18:37.863 "large_cache_size": 16, 00:18:37.863 "task_count": 2048, 00:18:37.863 "sequence_count": 2048, 00:18:37.863 "buf_count": 2048 00:18:37.863 } 00:18:37.863 } 00:18:37.863 ] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "bdev", 00:18:37.863 "config": [ 00:18:37.863 { 00:18:37.863 "method": "bdev_set_options", 00:18:37.863 "params": { 00:18:37.863 "bdev_io_pool_size": 65535, 00:18:37.863 "bdev_io_cache_size": 256, 00:18:37.863 "bdev_auto_examine": true, 00:18:37.863 "iobuf_small_cache_size": 128, 00:18:37.863 "iobuf_large_cache_size": 16 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "bdev_raid_set_options", 
00:18:37.863 "params": { 00:18:37.863 "process_window_size_kb": 1024 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "bdev_iscsi_set_options", 00:18:37.863 "params": { 00:18:37.863 "timeout_sec": 30 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "bdev_nvme_set_options", 00:18:37.863 "params": { 00:18:37.863 "action_on_timeout": "none", 00:18:37.863 "timeout_us": 0, 00:18:37.863 "timeout_admin_us": 0, 00:18:37.863 "keep_alive_timeout_ms": 10000, 00:18:37.863 "arbitration_burst": 0, 00:18:37.863 "low_priority_weight": 0, 00:18:37.863 "medium_priority_weight": 0, 00:18:37.863 "high_priority_weight": 0, 00:18:37.863 "nvme_adminq_poll_period_us": 10000, 00:18:37.863 "nvme_ioq_poll_period_us": 0, 00:18:37.863 "io_queue_requests": 0, 00:18:37.863 "delay_cmd_submit": true, 00:18:37.863 "transport_retry_count": 4, 00:18:37.863 "bdev_retry_count": 3, 00:18:37.863 "transport_ack_timeout": 0, 00:18:37.863 "ctrlr_loss_timeout_sec": 0, 00:18:37.863 "reconnect_delay_sec": 0, 00:18:37.863 "fast_io_fail_timeout_sec": 0, 00:18:37.863 "disable_auto_failback": false, 00:18:37.863 "generate_uuids": false, 00:18:37.863 "transport_tos": 0, 00:18:37.863 "nvme_error_stat": false, 00:18:37.863 "rdma_srq_size": 0, 00:18:37.863 "io_path_stat": false, 00:18:37.863 "allow_accel_sequence": false, 00:18:37.863 "rdma_max_cq_size": 0, 00:18:37.863 "rdma_cm_event_timeout_ms": 0, 00:18:37.863 "dhchap_digests": [ 00:18:37.863 "sha256", 00:18:37.863 "sha384", 00:18:37.863 "sha512" 00:18:37.863 ], 00:18:37.863 "dhchap_dhgroups": [ 00:18:37.863 "null", 00:18:37.863 "ffdhe2048", 00:18:37.863 "ffdhe3072", 00:18:37.863 "ffdhe4096", 00:18:37.863 "ffdhe6144", 00:18:37.863 "ffdhe8192" 00:18:37.863 ] 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "bdev_nvme_set_hotplug", 00:18:37.863 "params": { 00:18:37.863 "period_us": 100000, 00:18:37.863 "enable": false 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "bdev_malloc_create", 
00:18:37.863 "params": { 00:18:37.863 "name": "malloc0", 00:18:37.863 "num_blocks": 8192, 00:18:37.863 "block_size": 4096, 00:18:37.863 "physical_block_size": 4096, 00:18:37.863 "uuid": "7b08b900-311a-4854-9c55-ae4cacbadc5b", 00:18:37.863 "optimal_io_boundary": 0 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "bdev_wait_for_examine" 00:18:37.863 } 00:18:37.863 ] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "nbd", 00:18:37.863 "config": [] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "scheduler", 00:18:37.863 "config": [ 00:18:37.863 { 00:18:37.863 "method": "framework_set_scheduler", 00:18:37.863 "params": { 00:18:37.863 "name": "static" 00:18:37.863 } 00:18:37.863 } 00:18:37.863 ] 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "subsystem": "nvmf", 00:18:37.863 "config": [ 00:18:37.863 { 00:18:37.863 "method": "nvmf_set_config", 00:18:37.863 "params": { 00:18:37.863 "discovery_filter": "match_any", 00:18:37.863 "admin_cmd_passthru": { 00:18:37.863 "identify_ctrlr": false 00:18:37.863 } 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "nvmf_set_max_subsystems", 00:18:37.863 "params": { 00:18:37.863 "max_subsystems": 1024 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "nvmf_set_crdt", 00:18:37.863 "params": { 00:18:37.863 "crdt1": 0, 00:18:37.863 "crdt2": 0, 00:18:37.863 "crdt3": 0 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "nvmf_create_transport", 00:18:37.863 "params": { 00:18:37.863 "trtype": "TCP", 00:18:37.863 "max_queue_depth": 128, 00:18:37.863 "max_io_qpairs_per_ctrlr": 127, 00:18:37.863 "in_capsule_data_size": 4096, 00:18:37.863 "max_io_size": 131072, 00:18:37.863 "io_unit_size": 131072, 00:18:37.863 "max_aq_depth": 128, 00:18:37.863 "num_shared_buffers": 511, 00:18:37.863 "buf_cache_size": 4294967295, 00:18:37.863 "dif_insert_or_strip": false, 00:18:37.863 "zcopy": false, 00:18:37.863 "c2h_success": false, 00:18:37.863 "sock_priority": 0, 
00:18:37.863 "abort_timeout_sec": 1, 00:18:37.863 "ack_timeout": 0, 00:18:37.863 "data_wr_pool_size": 0 00:18:37.863 } 00:18:37.863 }, 00:18:37.863 { 00:18:37.863 "method": "nvmf_create_subsystem", 00:18:37.863 "params": { 00:18:37.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.863 "allow_any_host": false, 00:18:37.863 "serial_number": "00000000000000000000", 00:18:37.863 "model_number": "SPDK bdev Controller", 00:18:37.863 "max_namespaces": 32, 00:18:37.863 "min_cntlid": 1, 00:18:37.863 "max_cntlid": 65519, 00:18:37.863 "ana_reporting": false 00:18:37.864 } 00:18:37.864 }, 00:18:37.864 { 00:18:37.864 "method": "nvmf_subsystem_add_host", 00:18:37.864 "params": { 00:18:37.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.864 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.864 "psk": "key0" 00:18:37.864 } 00:18:37.864 }, 00:18:37.864 { 00:18:37.864 "method": "nvmf_subsystem_add_ns", 00:18:37.864 "params": { 00:18:37.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.864 "namespace": { 00:18:37.864 "nsid": 1, 00:18:37.864 "bdev_name": "malloc0", 00:18:37.864 "nguid": "7B08B900311A48549C55AE4CACBADC5B", 00:18:37.864 "uuid": "7b08b900-311a-4854-9c55-ae4cacbadc5b", 00:18:37.864 "no_auto_visible": false 00:18:37.864 } 00:18:37.864 } 00:18:37.864 }, 00:18:37.864 { 00:18:37.864 "method": "nvmf_subsystem_add_listener", 00:18:37.864 "params": { 00:18:37.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.864 "listen_address": { 00:18:37.864 "trtype": "TCP", 00:18:37.864 "adrfam": "IPv4", 00:18:37.864 "traddr": "10.0.0.2", 00:18:37.864 "trsvcid": "4420" 00:18:37.864 }, 00:18:37.864 "secure_channel": true 00:18:37.864 } 00:18:37.864 } 00:18:37.864 ] 00:18:37.864 } 00:18:37.864 ] 00:18:37.864 }' 00:18:37.864 16:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:38.124 16:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:38.124 "subsystems": [ 00:18:38.124 { 
00:18:38.124 "subsystem": "keyring", 00:18:38.124 "config": [ 00:18:38.124 { 00:18:38.124 "method": "keyring_file_add_key", 00:18:38.124 "params": { 00:18:38.124 "name": "key0", 00:18:38.124 "path": "/tmp/tmp.buU8qafjEd" 00:18:38.124 } 00:18:38.124 } 00:18:38.124 ] 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "subsystem": "iobuf", 00:18:38.124 "config": [ 00:18:38.124 { 00:18:38.124 "method": "iobuf_set_options", 00:18:38.124 "params": { 00:18:38.124 "small_pool_count": 8192, 00:18:38.124 "large_pool_count": 1024, 00:18:38.124 "small_bufsize": 8192, 00:18:38.124 "large_bufsize": 135168 00:18:38.124 } 00:18:38.124 } 00:18:38.124 ] 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "subsystem": "sock", 00:18:38.124 "config": [ 00:18:38.124 { 00:18:38.124 "method": "sock_set_default_impl", 00:18:38.124 "params": { 00:18:38.124 "impl_name": "posix" 00:18:38.124 } 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "method": "sock_impl_set_options", 00:18:38.124 "params": { 00:18:38.124 "impl_name": "ssl", 00:18:38.124 "recv_buf_size": 4096, 00:18:38.124 "send_buf_size": 4096, 00:18:38.124 "enable_recv_pipe": true, 00:18:38.124 "enable_quickack": false, 00:18:38.124 "enable_placement_id": 0, 00:18:38.124 "enable_zerocopy_send_server": true, 00:18:38.124 "enable_zerocopy_send_client": false, 00:18:38.124 "zerocopy_threshold": 0, 00:18:38.124 "tls_version": 0, 00:18:38.124 "enable_ktls": false 00:18:38.124 } 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "method": "sock_impl_set_options", 00:18:38.124 "params": { 00:18:38.124 "impl_name": "posix", 00:18:38.124 "recv_buf_size": 2097152, 00:18:38.124 "send_buf_size": 2097152, 00:18:38.124 "enable_recv_pipe": true, 00:18:38.124 "enable_quickack": false, 00:18:38.124 "enable_placement_id": 0, 00:18:38.124 "enable_zerocopy_send_server": true, 00:18:38.124 "enable_zerocopy_send_client": false, 00:18:38.124 "zerocopy_threshold": 0, 00:18:38.124 "tls_version": 0, 00:18:38.124 "enable_ktls": false 00:18:38.124 } 00:18:38.124 } 00:18:38.124 ] 
00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "subsystem": "vmd", 00:18:38.124 "config": [] 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "subsystem": "accel", 00:18:38.124 "config": [ 00:18:38.124 { 00:18:38.124 "method": "accel_set_options", 00:18:38.124 "params": { 00:18:38.124 "small_cache_size": 128, 00:18:38.124 "large_cache_size": 16, 00:18:38.124 "task_count": 2048, 00:18:38.124 "sequence_count": 2048, 00:18:38.124 "buf_count": 2048 00:18:38.124 } 00:18:38.124 } 00:18:38.124 ] 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "subsystem": "bdev", 00:18:38.124 "config": [ 00:18:38.124 { 00:18:38.124 "method": "bdev_set_options", 00:18:38.124 "params": { 00:18:38.124 "bdev_io_pool_size": 65535, 00:18:38.124 "bdev_io_cache_size": 256, 00:18:38.124 "bdev_auto_examine": true, 00:18:38.124 "iobuf_small_cache_size": 128, 00:18:38.124 "iobuf_large_cache_size": 16 00:18:38.124 } 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "method": "bdev_raid_set_options", 00:18:38.124 "params": { 00:18:38.124 "process_window_size_kb": 1024 00:18:38.124 } 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "method": "bdev_iscsi_set_options", 00:18:38.124 "params": { 00:18:38.124 "timeout_sec": 30 00:18:38.124 } 00:18:38.124 }, 00:18:38.124 { 00:18:38.124 "method": "bdev_nvme_set_options", 00:18:38.124 "params": { 00:18:38.124 "action_on_timeout": "none", 00:18:38.124 "timeout_us": 0, 00:18:38.124 "timeout_admin_us": 0, 00:18:38.124 "keep_alive_timeout_ms": 10000, 00:18:38.124 "arbitration_burst": 0, 00:18:38.124 "low_priority_weight": 0, 00:18:38.124 "medium_priority_weight": 0, 00:18:38.124 "high_priority_weight": 0, 00:18:38.124 "nvme_adminq_poll_period_us": 10000, 00:18:38.124 "nvme_ioq_poll_period_us": 0, 00:18:38.125 "io_queue_requests": 512, 00:18:38.125 "delay_cmd_submit": true, 00:18:38.125 "transport_retry_count": 4, 00:18:38.125 "bdev_retry_count": 3, 00:18:38.125 "transport_ack_timeout": 0, 00:18:38.125 "ctrlr_loss_timeout_sec": 0, 00:18:38.125 "reconnect_delay_sec": 0, 00:18:38.125 
"fast_io_fail_timeout_sec": 0, 00:18:38.125 "disable_auto_failback": false, 00:18:38.125 "generate_uuids": false, 00:18:38.125 "transport_tos": 0, 00:18:38.125 "nvme_error_stat": false, 00:18:38.125 "rdma_srq_size": 0, 00:18:38.125 "io_path_stat": false, 00:18:38.125 "allow_accel_sequence": false, 00:18:38.125 "rdma_max_cq_size": 0, 00:18:38.125 "rdma_cm_event_timeout_ms": 0, 00:18:38.125 "dhchap_digests": [ 00:18:38.125 "sha256", 00:18:38.125 "sha384", 00:18:38.125 "sha512" 00:18:38.125 ], 00:18:38.125 "dhchap_dhgroups": [ 00:18:38.125 "null", 00:18:38.125 "ffdhe2048", 00:18:38.125 "ffdhe3072", 00:18:38.125 "ffdhe4096", 00:18:38.125 "ffdhe6144", 00:18:38.125 "ffdhe8192" 00:18:38.125 ] 00:18:38.125 } 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "method": "bdev_nvme_attach_controller", 00:18:38.125 "params": { 00:18:38.125 "name": "nvme0", 00:18:38.125 "trtype": "TCP", 00:18:38.125 "adrfam": "IPv4", 00:18:38.125 "traddr": "10.0.0.2", 00:18:38.125 "trsvcid": "4420", 00:18:38.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.125 "prchk_reftag": false, 00:18:38.125 "prchk_guard": false, 00:18:38.125 "ctrlr_loss_timeout_sec": 0, 00:18:38.125 "reconnect_delay_sec": 0, 00:18:38.125 "fast_io_fail_timeout_sec": 0, 00:18:38.125 "psk": "key0", 00:18:38.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.125 "hdgst": false, 00:18:38.125 "ddgst": false 00:18:38.125 } 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "method": "bdev_nvme_set_hotplug", 00:18:38.125 "params": { 00:18:38.125 "period_us": 100000, 00:18:38.125 "enable": false 00:18:38.125 } 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "method": "bdev_enable_histogram", 00:18:38.125 "params": { 00:18:38.125 "name": "nvme0n1", 00:18:38.125 "enable": true 00:18:38.125 } 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "method": "bdev_wait_for_examine" 00:18:38.125 } 00:18:38.125 ] 00:18:38.125 }, 00:18:38.125 { 00:18:38.125 "subsystem": "nbd", 00:18:38.125 "config": [] 00:18:38.125 } 00:18:38.125 ] 00:18:38.125 }' 00:18:38.125 
16:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1542738 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1542738 ']' 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1542738 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542738 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542738' 00:18:38.125 killing process with pid 1542738 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1542738 00:18:38.125 Received shutdown signal, test time was about 1.000000 seconds 00:18:38.125 00:18:38.125 Latency(us) 00:18:38.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.125 =================================================================================================================== 00:18:38.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.125 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1542738 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1542587 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1542587 ']' 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1542587 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542587 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542587' 00:18:38.385 killing process with pid 1542587 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1542587 00:18:38.385 16:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1542587 00:18:38.644 16:35:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:38.644 16:35:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.644 16:35:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:38.644 "subsystems": [ 00:18:38.644 { 00:18:38.644 "subsystem": "keyring", 00:18:38.644 "config": [ 00:18:38.644 { 00:18:38.644 "method": "keyring_file_add_key", 00:18:38.644 "params": { 00:18:38.644 "name": "key0", 00:18:38.644 "path": "/tmp/tmp.buU8qafjEd" 00:18:38.644 } 00:18:38.644 } 00:18:38.644 ] 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "subsystem": "iobuf", 00:18:38.644 "config": [ 00:18:38.644 { 00:18:38.644 "method": "iobuf_set_options", 00:18:38.644 "params": { 00:18:38.644 "small_pool_count": 8192, 00:18:38.644 "large_pool_count": 1024, 00:18:38.644 "small_bufsize": 8192, 00:18:38.644 "large_bufsize": 135168 00:18:38.644 } 00:18:38.644 } 00:18:38.644 ] 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "subsystem": "sock", 00:18:38.644 "config": [ 00:18:38.644 { 00:18:38.644 "method": "sock_set_default_impl", 00:18:38.644 "params": { 00:18:38.644 "impl_name": "posix" 00:18:38.644 } 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "method": "sock_impl_set_options", 00:18:38.644 "params": { 00:18:38.644 "impl_name": "ssl", 00:18:38.644 "recv_buf_size": 4096, 00:18:38.644 "send_buf_size": 4096, 
00:18:38.644 "enable_recv_pipe": true, 00:18:38.644 "enable_quickack": false, 00:18:38.644 "enable_placement_id": 0, 00:18:38.644 "enable_zerocopy_send_server": true, 00:18:38.644 "enable_zerocopy_send_client": false, 00:18:38.644 "zerocopy_threshold": 0, 00:18:38.644 "tls_version": 0, 00:18:38.644 "enable_ktls": false 00:18:38.644 } 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "method": "sock_impl_set_options", 00:18:38.644 "params": { 00:18:38.644 "impl_name": "posix", 00:18:38.644 "recv_buf_size": 2097152, 00:18:38.644 "send_buf_size": 2097152, 00:18:38.644 "enable_recv_pipe": true, 00:18:38.644 "enable_quickack": false, 00:18:38.644 "enable_placement_id": 0, 00:18:38.644 "enable_zerocopy_send_server": true, 00:18:38.644 "enable_zerocopy_send_client": false, 00:18:38.644 "zerocopy_threshold": 0, 00:18:38.644 "tls_version": 0, 00:18:38.644 "enable_ktls": false 00:18:38.644 } 00:18:38.644 } 00:18:38.644 ] 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "subsystem": "vmd", 00:18:38.644 "config": [] 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "subsystem": "accel", 00:18:38.644 "config": [ 00:18:38.644 { 00:18:38.644 "method": "accel_set_options", 00:18:38.644 "params": { 00:18:38.644 "small_cache_size": 128, 00:18:38.644 "large_cache_size": 16, 00:18:38.644 "task_count": 2048, 00:18:38.644 "sequence_count": 2048, 00:18:38.644 "buf_count": 2048 00:18:38.644 } 00:18:38.644 } 00:18:38.644 ] 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "subsystem": "bdev", 00:18:38.644 "config": [ 00:18:38.644 { 00:18:38.644 "method": "bdev_set_options", 00:18:38.644 "params": { 00:18:38.644 "bdev_io_pool_size": 65535, 00:18:38.644 "bdev_io_cache_size": 256, 00:18:38.644 "bdev_auto_examine": true, 00:18:38.644 "iobuf_small_cache_size": 128, 00:18:38.644 "iobuf_large_cache_size": 16 00:18:38.644 } 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "method": "bdev_raid_set_options", 00:18:38.644 "params": { 00:18:38.644 "process_window_size_kb": 1024 00:18:38.644 } 00:18:38.644 }, 00:18:38.644 { 
00:18:38.644 "method": "bdev_iscsi_set_options", 00:18:38.644 "params": { 00:18:38.644 "timeout_sec": 30 00:18:38.644 } 00:18:38.644 }, 00:18:38.644 { 00:18:38.644 "method": "bdev_nvme_set_options", 00:18:38.644 "params": { 00:18:38.644 "action_on_timeout": "none", 00:18:38.644 "timeout_us": 0, 00:18:38.644 "timeout_admin_us": 0, 00:18:38.644 "keep_alive_timeout_ms": 10000, 00:18:38.644 "arbitration_burst": 0, 00:18:38.644 "low_priority_weight": 0, 00:18:38.644 "medium_priority_weight": 0, 00:18:38.644 "high_priority_weight": 0, 00:18:38.644 "nvme_adminq_poll_period_us": 10000, 00:18:38.644 "nvme_ioq_poll_period_us": 0, 00:18:38.644 "io_queue_requests": 0, 00:18:38.644 "delay_cmd_submit": true, 00:18:38.644 "transport_retry_count": 4, 00:18:38.644 "bdev_retry_count": 3, 00:18:38.644 "transport_ack_timeout": 0, 00:18:38.644 "ctrlr_loss_timeout_sec": 0, 00:18:38.644 "reconnect_delay_sec": 0, 00:18:38.644 "fast_io_fail_timeout_sec": 0, 00:18:38.645 "disable_auto_failback": false, 00:18:38.645 "generate_uuids": false, 00:18:38.645 "transport_tos": 0, 00:18:38.645 "nvme_error_stat": false, 00:18:38.645 "rdma_srq_size": 0, 00:18:38.645 "io_path_stat": false, 00:18:38.645 "allow_accel_sequence": false, 00:18:38.645 "rdma_max_cq_size": 0, 00:18:38.645 "rdma_cm_event_timeout_ms": 0, 00:18:38.645 "dhchap_digests": [ 00:18:38.645 "sha256", 00:18:38.645 "sha384", 00:18:38.645 "sha512" 00:18:38.645 ], 00:18:38.645 "dhchap_dhgroups": [ 00:18:38.645 "null", 00:18:38.645 "ffdhe2048", 00:18:38.645 "ffdhe3072", 00:18:38.645 "ffdhe4096", 00:18:38.645 "ffdhe6144", 00:18:38.645 "ffdhe8192" 00:18:38.645 ] 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "bdev_nvme_set_hotplug", 00:18:38.645 "params": { 00:18:38.645 "period_us": 100000, 00:18:38.645 "enable": false 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "bdev_malloc_create", 00:18:38.645 "params": { 00:18:38.645 "name": "malloc0", 00:18:38.645 "num_blocks": 8192, 00:18:38.645 
"block_size": 4096, 00:18:38.645 "physical_block_size": 4096, 00:18:38.645 "uuid": "7b08b900-311a-4854-9c55-ae4cacbadc5b", 00:18:38.645 "optimal_io_boundary": 0 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "bdev_wait_for_examine" 00:18:38.645 } 00:18:38.645 ] 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "subsystem": "nbd", 00:18:38.645 "config": [] 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "subsystem": "scheduler", 00:18:38.645 "config": [ 00:18:38.645 { 00:18:38.645 "method": "framework_set_scheduler", 00:18:38.645 "params": { 00:18:38.645 "name": "static" 00:18:38.645 } 00:18:38.645 } 00:18:38.645 ] 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "subsystem": "nvmf", 00:18:38.645 "config": [ 00:18:38.645 { 00:18:38.645 "method": "nvmf_set_config", 00:18:38.645 "params": { 00:18:38.645 "discovery_filter": "match_any", 00:18:38.645 "admin_cmd_passthru": { 00:18:38.645 "identify_ctrlr": false 00:18:38.645 } 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "nvmf_set_max_subsystems", 00:18:38.645 "params": { 00:18:38.645 "max_subsystems": 1024 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "nvmf_set_crdt", 00:18:38.645 "params": { 00:18:38.645 "crdt1": 0, 00:18:38.645 "crdt2": 0, 00:18:38.645 "crdt3": 0 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "nvmf_create_transport", 00:18:38.645 "params": { 00:18:38.645 "trtype": "TCP", 00:18:38.645 "max_queue_depth": 128, 00:18:38.645 "max_io_qpairs_per_ctrlr": 127, 00:18:38.645 "in_capsule_data_size": 4096, 00:18:38.645 "max_io_size": 131072, 00:18:38.645 "io_unit_size": 131072, 00:18:38.645 "max_aq_depth": 128, 00:18:38.645 "num_shared_buffers": 511, 00:18:38.645 "buf_cache_size": 4294967295, 00:18:38.645 "dif_insert_or_strip": false, 00:18:38.645 "zcopy": false, 00:18:38.645 "c2h_success": false, 00:18:38.645 "sock_priority": 0, 00:18:38.645 "abort_timeout_sec": 1, 00:18:38.645 "ack_timeout": 0, 00:18:38.645 "data_wr_pool_size": 0 
00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "nvmf_create_subsystem", 00:18:38.645 "params": { 00:18:38.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.645 "allow_any_host": false, 00:18:38.645 "serial_number": "00000000000000000000", 00:18:38.645 "model_number": "SPDK bdev Controller", 00:18:38.645 "max_namespaces": 32, 00:18:38.645 "min_cntlid": 1, 00:18:38.645 "max_cntlid": 65519, 00:18:38.645 "ana_reporting": false 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "nvmf_subsystem_add_host", 00:18:38.645 "params": { 00:18:38.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.645 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.645 "psk": "key0" 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "nvmf_subsystem_add_ns", 00:18:38.645 "params": { 00:18:38.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.645 "namespace": { 00:18:38.645 "nsid": 1, 00:18:38.645 "bdev_name": "malloc0", 00:18:38.645 "nguid": "7B08B900311A48549C55AE4CACBADC5B", 00:18:38.645 "uuid": "7b08b900-311a-4854-9c55-ae4cacbadc5b", 00:18:38.645 "no_auto_visible": false 00:18:38.645 } 00:18:38.645 } 00:18:38.645 }, 00:18:38.645 { 00:18:38.645 "method": "nvmf_subsystem_add_listener", 00:18:38.645 "params": { 00:18:38.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.645 "listen_address": { 00:18:38.645 "trtype": "TCP", 00:18:38.645 "adrfam": "IPv4", 00:18:38.645 "traddr": "10.0.0.2", 00:18:38.645 "trsvcid": "4420" 00:18:38.645 }, 00:18:38.645 "secure_channel": true 00:18:38.645 } 00:18:38.645 } 00:18:38.645 ] 00:18:38.645 } 00:18:38.645 ] 00:18:38.645 }' 00:18:38.645 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.645 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.645 16:35:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1543037 00:18:38.645 16:35:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:38.645 16:35:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1543037 00:18:38.645 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1543037 ']' 00:18:38.646 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.646 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.646 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.646 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.646 16:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.646 [2024-07-15 16:35:18.201715] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:38.646 [2024-07-15 16:35:18.201811] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.646 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.903 [2024-07-15 16:35:18.268912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.903 [2024-07-15 16:35:18.374488] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.903 [2024-07-15 16:35:18.374574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:38.903 [2024-07-15 16:35:18.374586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.903 [2024-07-15 16:35:18.374597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.903 [2024-07-15 16:35:18.374607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.903 [2024-07-15 16:35:18.374678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.160 [2024-07-15 16:35:18.613636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.160 [2024-07-15 16:35:18.645649] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.160 [2024-07-15 16:35:18.654088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1543190 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1543190 /var/tmp/bdevperf.sock 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1543190 ']' 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.733 16:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:39.733 "subsystems": [ 00:18:39.733 { 00:18:39.733 "subsystem": "keyring", 00:18:39.733 "config": [ 00:18:39.733 { 00:18:39.733 "method": "keyring_file_add_key", 00:18:39.733 "params": { 00:18:39.733 "name": "key0", 00:18:39.733 "path": "/tmp/tmp.buU8qafjEd" 00:18:39.733 } 00:18:39.733 } 00:18:39.733 ] 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "subsystem": "iobuf", 00:18:39.733 "config": [ 00:18:39.733 { 00:18:39.733 "method": "iobuf_set_options", 00:18:39.733 "params": { 00:18:39.733 "small_pool_count": 8192, 00:18:39.733 "large_pool_count": 1024, 00:18:39.733 "small_bufsize": 8192, 00:18:39.733 "large_bufsize": 135168 00:18:39.733 } 00:18:39.733 } 00:18:39.733 ] 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "subsystem": "sock", 00:18:39.733 "config": [ 00:18:39.733 { 00:18:39.733 "method": "sock_set_default_impl", 00:18:39.733 "params": { 00:18:39.733 "impl_name": "posix" 00:18:39.733 } 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "method": "sock_impl_set_options", 00:18:39.733 "params": { 00:18:39.733 "impl_name": "ssl", 00:18:39.733 "recv_buf_size": 4096, 00:18:39.733 "send_buf_size": 4096, 00:18:39.733 "enable_recv_pipe": true, 00:18:39.733 "enable_quickack": false, 00:18:39.733 "enable_placement_id": 0, 00:18:39.733 "enable_zerocopy_send_server": true, 00:18:39.733 "enable_zerocopy_send_client": false, 00:18:39.733 "zerocopy_threshold": 0, 00:18:39.733 "tls_version": 0, 00:18:39.733 "enable_ktls": false 00:18:39.733 } 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "method": "sock_impl_set_options", 00:18:39.733 "params": { 00:18:39.733 "impl_name": "posix", 00:18:39.733 "recv_buf_size": 2097152, 00:18:39.733 "send_buf_size": 2097152, 
00:18:39.733 "enable_recv_pipe": true, 00:18:39.733 "enable_quickack": false, 00:18:39.733 "enable_placement_id": 0, 00:18:39.733 "enable_zerocopy_send_server": true, 00:18:39.733 "enable_zerocopy_send_client": false, 00:18:39.733 "zerocopy_threshold": 0, 00:18:39.733 "tls_version": 0, 00:18:39.733 "enable_ktls": false 00:18:39.733 } 00:18:39.733 } 00:18:39.733 ] 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "subsystem": "vmd", 00:18:39.733 "config": [] 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "subsystem": "accel", 00:18:39.733 "config": [ 00:18:39.733 { 00:18:39.733 "method": "accel_set_options", 00:18:39.733 "params": { 00:18:39.733 "small_cache_size": 128, 00:18:39.733 "large_cache_size": 16, 00:18:39.733 "task_count": 2048, 00:18:39.733 "sequence_count": 2048, 00:18:39.733 "buf_count": 2048 00:18:39.733 } 00:18:39.733 } 00:18:39.733 ] 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "subsystem": "bdev", 00:18:39.733 "config": [ 00:18:39.733 { 00:18:39.733 "method": "bdev_set_options", 00:18:39.733 "params": { 00:18:39.733 "bdev_io_pool_size": 65535, 00:18:39.733 "bdev_io_cache_size": 256, 00:18:39.733 "bdev_auto_examine": true, 00:18:39.733 "iobuf_small_cache_size": 128, 00:18:39.733 "iobuf_large_cache_size": 16 00:18:39.733 } 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "method": "bdev_raid_set_options", 00:18:39.733 "params": { 00:18:39.733 "process_window_size_kb": 1024 00:18:39.733 } 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "method": "bdev_iscsi_set_options", 00:18:39.733 "params": { 00:18:39.733 "timeout_sec": 30 00:18:39.733 } 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "method": "bdev_nvme_set_options", 00:18:39.733 "params": { 00:18:39.733 "action_on_timeout": "none", 00:18:39.733 "timeout_us": 0, 00:18:39.733 "timeout_admin_us": 0, 00:18:39.733 "keep_alive_timeout_ms": 10000, 00:18:39.733 "arbitration_burst": 0, 00:18:39.733 "low_priority_weight": 0, 00:18:39.733 "medium_priority_weight": 0, 00:18:39.733 "high_priority_weight": 0, 00:18:39.733 
"nvme_adminq_poll_period_us": 10000, 00:18:39.733 "nvme_ioq_poll_period_us": 0, 00:18:39.733 "io_queue_requests": 512, 00:18:39.733 "delay_cmd_submit": true, 00:18:39.733 "transport_retry_count": 4, 00:18:39.733 "bdev_retry_count": 3, 00:18:39.733 "transport_ack_timeout": 0, 00:18:39.733 "ctrlr_loss_timeout_sec": 0, 00:18:39.733 "reconnect_delay_sec": 0, 00:18:39.733 "fast_io_fail_timeout_sec": 0, 00:18:39.733 "disable_auto_failback": false, 00:18:39.733 "generate_uuids": false, 00:18:39.733 "transport_tos": 0, 00:18:39.733 "nvme_error_stat": false, 00:18:39.733 "rdma_srq_size": 0, 00:18:39.733 "io_path_stat": false, 00:18:39.733 "allow_accel_sequence": false, 00:18:39.733 "rdma_max_cq_size": 0, 00:18:39.733 "rdma_cm_event_timeout_ms": 0, 00:18:39.733 "dhchap_digests": [ 00:18:39.733 "sha256", 00:18:39.733 "sha384", 00:18:39.733 "sha512" 00:18:39.733 ], 00:18:39.733 "dhchap_dhgroups": [ 00:18:39.733 "null", 00:18:39.733 "ffdhe2048", 00:18:39.733 "ffdhe3072", 00:18:39.733 "ffdhe4096", 00:18:39.733 "ffdhe6144", 00:18:39.733 "ffdhe8192" 00:18:39.733 ] 00:18:39.733 } 00:18:39.733 }, 00:18:39.733 { 00:18:39.733 "method": "bdev_nvme_attach_controller", 00:18:39.733 "params": { 00:18:39.733 "name": "nvme0", 00:18:39.733 "trtype": "TCP", 00:18:39.733 "adrfam": "IPv4", 00:18:39.733 "traddr": "10.0.0.2", 00:18:39.733 "trsvcid": "4420", 00:18:39.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.734 "prchk_reftag": false, 00:18:39.734 "prchk_guard": false, 00:18:39.734 "ctrlr_loss_timeout_sec": 0, 00:18:39.734 "reconnect_delay_sec": 0, 00:18:39.734 "fast_io_fail_timeout_sec": 0, 00:18:39.734 "psk": "key0", 00:18:39.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.734 "hdgst": false, 00:18:39.734 "ddgst": false 00:18:39.734 } 00:18:39.734 }, 00:18:39.734 { 00:18:39.734 "method": "bdev_nvme_set_hotplug", 00:18:39.734 "params": { 00:18:39.734 "period_us": 100000, 00:18:39.734 "enable": false 00:18:39.734 } 00:18:39.734 }, 00:18:39.734 { 00:18:39.734 "method": 
"bdev_enable_histogram", 00:18:39.734 "params": { 00:18:39.734 "name": "nvme0n1", 00:18:39.734 "enable": true 00:18:39.734 } 00:18:39.734 }, 00:18:39.734 { 00:18:39.734 "method": "bdev_wait_for_examine" 00:18:39.734 } 00:18:39.734 ] 00:18:39.734 }, 00:18:39.734 { 00:18:39.734 "subsystem": "nbd", 00:18:39.734 "config": [] 00:18:39.734 } 00:18:39.734 ] 00:18:39.734 }' 00:18:39.734 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.734 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.734 16:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.734 [2024-07-15 16:35:19.222768] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:39.734 [2024-07-15 16:35:19.222861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543190 ] 00:18:39.734 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.734 [2024-07-15 16:35:19.286454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.040 [2024-07-15 16:35:19.406303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.040 [2024-07-15 16:35:19.591062] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.607 16:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.607 16:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:40.607 16:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:18:40.607 16:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:40.864 16:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.864 16:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.120 Running I/O for 1 seconds... 00:18:42.051 00:18:42.051 Latency(us) 00:18:42.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.051 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:42.051 Verification LBA range: start 0x0 length 0x2000 00:18:42.051 nvme0n1 : 1.05 2344.70 9.16 0.00 0.00 53447.52 7136.14 88546.42 00:18:42.051 =================================================================================================================== 00:18:42.051 Total : 2344.70 9.16 0.00 0.00 53447.52 7136.14 88546.42 00:18:42.051 0 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:42.051 16:35:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:42.051 nvmf_trace.0 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1543190 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1543190 ']' 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1543190 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543190 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543190' 00:18:42.310 killing process with pid 1543190 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1543190 00:18:42.310 Received shutdown signal, test time was about 1.000000 seconds 00:18:42.310 00:18:42.310 Latency(us) 00:18:42.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.310 =================================================================================================================== 00:18:42.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.310 16:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1543190 00:18:42.567 16:35:21 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:42.567 16:35:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.567 16:35:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # 
sync 00:18:42.567 16:35:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.567 16:35:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:42.567 16:35:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.567 16:35:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.567 rmmod nvme_tcp 00:18:42.567 rmmod nvme_fabrics 00:18:42.567 rmmod nvme_keyring 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1543037 ']' 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1543037 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1543037 ']' 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1543037 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543037 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543037' 00:18:42.567 killing process with pid 1543037 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1543037 00:18:42.567 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1543037 00:18:42.826 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.826 16:35:22 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.826 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.826 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.826 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.826 16:35:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.826 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.826 16:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.356 16:35:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.356 16:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1iNDqcG9mb /tmp/tmp.dbxJf9H7jk /tmp/tmp.buU8qafjEd 00:18:45.356 00:18:45.356 real 1m22.198s 00:18:45.356 user 2m9.911s 00:18:45.356 sys 0m28.545s 00:18:45.356 16:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.356 16:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.356 ************************************ 00:18:45.356 END TEST nvmf_tls 00:18:45.356 ************************************ 00:18:45.356 16:35:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:45.356 16:35:24 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.356 16:35:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:45.356 16:35:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.356 16:35:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.356 ************************************ 00:18:45.356 START TEST nvmf_fips 00:18:45.356 ************************************ 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:45.356 * Looking for test storage... 00:18:45.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.356 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:45.357 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:45.358 Error setting digest 00:18:45.358 006236E18A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:45.358 006236E18A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:18:45.358 16:35:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.257 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:47.258 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:47.258 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:47.258 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:47.258 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:18:47.258 00:18:47.258 --- 10.0.0.2 ping statistics --- 00:18:47.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.258 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:18:47.258 00:18:47.258 --- 10.0.0.1 ping statistics --- 00:18:47.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.258 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.258 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1545543 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1545543 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1545543 ']' 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.259 16:35:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.259 [2024-07-15 16:35:26.781676] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:47.259 [2024-07-15 16:35:26.781785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.259 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.259 [2024-07-15 16:35:26.850682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.517 [2024-07-15 16:35:26.965976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.517 [2024-07-15 16:35:26.966035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.517 [2024-07-15 16:35:26.966060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.517 [2024-07-15 16:35:26.966072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.517 [2024-07-15 16:35:26.966083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:47.517 [2024-07-15 16:35:26.966129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.450 16:35:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.450 [2024-07-15 16:35:27.998158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.450 [2024-07-15 16:35:28.014127] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:18:48.450 [2024-07-15 16:35:28.014367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.450 [2024-07-15 16:35:28.046649] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:48.708 malloc0 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1545702 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1545702 /var/tmp/bdevperf.sock 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1545702 ']' 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.708 16:35:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:48.708 [2024-07-15 16:35:28.138433] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:18:48.708 [2024-07-15 16:35:28.138528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545702 ] 00:18:48.708 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.708 [2024-07-15 16:35:28.195426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.708 [2024-07-15 16:35:28.301804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.640 16:35:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.640 16:35:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:49.640 16:35:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:49.897 [2024-07-15 16:35:29.263193] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.898 [2024-07-15 16:35:29.263311] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:49.898 TLSTESTn1 00:18:49.898 16:35:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.898 Running I/O for 10 seconds... 
00:19:02.087 00:19:02.088 Latency(us) 00:19:02.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.088 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:02.088 Verification LBA range: start 0x0 length 0x2000 00:19:02.088 TLSTESTn1 : 10.05 2510.76 9.81 0.00 0.00 50851.20 9611.95 76895.57 00:19:02.088 =================================================================================================================== 00:19:02.088 Total : 2510.76 9.81 0.00 0.00 50851.20 9611.95 76895.57 00:19:02.088 0 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:02.088 nvmf_trace.0 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1545702 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1545702 ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 
1545702 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1545702 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1545702' 00:19:02.088 killing process with pid 1545702 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1545702 00:19:02.088 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.088 00:19:02.088 Latency(us) 00:19:02.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.088 =================================================================================================================== 00:19:02.088 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.088 [2024-07-15 16:35:39.636065] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1545702 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:19:02.088 rmmod nvme_tcp 00:19:02.088 rmmod nvme_fabrics 00:19:02.088 rmmod nvme_keyring 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1545543 ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1545543 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1545543 ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1545543 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.088 16:35:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1545543 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1545543' 00:19:02.088 killing process with pid 1545543 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1545543 00:19:02.088 [2024-07-15 16:35:40.002739] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1545543 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.088 16:35:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.025 16:35:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.025 16:35:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:03.025 00:19:03.025 real 0m17.902s 00:19:03.025 user 0m22.779s 00:19:03.025 sys 0m6.557s 00:19:03.025 16:35:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:03.025 16:35:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.025 ************************************ 00:19:03.025 END TEST nvmf_fips 00:19:03.025 ************************************ 00:19:03.025 16:35:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:03.025 16:35:42 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:03.025 16:35:42 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:03.025 16:35:42 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:03.025 16:35:42 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:03.025 16:35:42 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.025 16:35:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:04.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:04.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:04.941 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:04.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:04.941 16:35:44 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:04.941 16:35:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:04.941 16:35:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:04.941 16:35:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.941 ************************************ 00:19:04.941 START TEST nvmf_perf_adq 00:19:04.941 ************************************ 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:04.941 * Looking for test storage... 00:19:04.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.941 16:35:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.941 16:35:44 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.942 16:35:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.840 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
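The trace above shows nvmf/common.sh bucketing NICs into per-family arrays (e810, x722, mlx) by vendor:device ID. A minimal sketch of that bucketing, with a hand-filled `pci_bus_cache` standing in for the real lspci-derived cache (the addresses and the ice/E810 ID 0x8086:0x159b are the ones from this log; everything else here is a placeholder):

```shell
#!/usr/bin/env bash
# Sketch only: pci_bus_cache is normally populated from lspci/sysfs earlier
# in nvmf/common.sh; here it is filled by hand to mirror this log.
declare -A pci_bus_cache
intel=0x8086 mellanox=0x15b3
pci_bus_cache["$intel:0x159b"]="0000:0a:00.0 0000:0a:00.1"  # the two ice ports seen above

e810=() x722=() mlx=()
# Unquoted expansion intentionally word-splits the space-separated addresses,
# and expands to nothing for IDs that are absent from the cache.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

pci_devs=("${e810[@]}")
echo "${#pci_devs[@]}"   # 2, matching the (( 2 == 0 )) guard in the trace
```

With two E810 ports cached, `pci_devs` ends up with two entries, which is why the `(( 2 == 0 ))` emptiness check in the log passes.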
00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:06.841 16:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:07.772 16:35:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:09.668 16:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
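The repeated `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` / `${pci_net_devs[@]##*/}` pair in the trace maps a PCI address to its kernel netdev names: glob the device's `net/` directory, then strip everything up to the last slash. A self-contained sketch of that lookup, run against a throwaway directory tree instead of the real /sys (the path and interface name mirror this log):

```shell
#!/usr/bin/env bash
# Sketch: fake sysfs layout under mktemp so this runs without the hardware.
sysfs=$(mktemp -d)
pci="0000:0a:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)        # full paths under the device node
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

This is the same two-step idiom nvmf/common.sh@383 and @399 trace above, just pointed at a temporary tree.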
00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.933 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:14.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:19:14.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:14.934 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:14.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.934 16:35:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:19:14.934 00:19:14.934 --- 10.0.0.2 ping statistics --- 00:19:14.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.934 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:19:14.934 00:19:14.934 --- 10.0.0.1 ping statistics --- 00:19:14.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.934 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1551574 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1551574 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 1551574 ']' 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.934 16:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.934 [2024-07-15 16:35:54.188034] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:14.934 [2024-07-15 16:35:54.188109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.934 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.934 [2024-07-15 16:35:54.258421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.934 [2024-07-15 16:35:54.376523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.935 [2024-07-15 16:35:54.376583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.935 [2024-07-15 16:35:54.376603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.935 [2024-07-15 16:35:54.376616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.935 [2024-07-15 16:35:54.376627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
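The connectivity checks above end with ping summary lines such as `rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms`. If a script needed the average RTT as a number (this log only prints it), one way to pull it out is an awk field split on `=`, `/`, and spaces; the line below is copied from the log:

```shell
#!/usr/bin/env bash
# Sketch: fields after splitting are
#   $1..$5 = rtt min avg max mdev, $6..$9 = the four values, $10 = ms
summary='rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms'
avg=$(awk -F'[=/ ]+' '{print $7}' <<< "$summary")
echo "$avg"   # 0.214
```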
00:19:14.935 [2024-07-15 16:35:54.376697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.935 [2024-07-15 16:35:54.376767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.935 [2024-07-15 16:35:54.376859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.935 [2024-07-15 16:35:54.376862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 [2024-07-15 16:35:55.309835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 Malloc1 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 
16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.867 [2024-07-15 16:35:55.363286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1551733 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:15.867 16:35:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:15.867 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:18.394 
"tick_rate": 2700000000, 00:19:18.394 "poll_groups": [ 00:19:18.394 { 00:19:18.394 "name": "nvmf_tgt_poll_group_000", 00:19:18.394 "admin_qpairs": 1, 00:19:18.394 "io_qpairs": 1, 00:19:18.394 "current_admin_qpairs": 1, 00:19:18.394 "current_io_qpairs": 1, 00:19:18.394 "pending_bdev_io": 0, 00:19:18.394 "completed_nvme_io": 21121, 00:19:18.394 "transports": [ 00:19:18.394 { 00:19:18.394 "trtype": "TCP" 00:19:18.394 } 00:19:18.394 ] 00:19:18.394 }, 00:19:18.394 { 00:19:18.394 "name": "nvmf_tgt_poll_group_001", 00:19:18.394 "admin_qpairs": 0, 00:19:18.394 "io_qpairs": 1, 00:19:18.394 "current_admin_qpairs": 0, 00:19:18.394 "current_io_qpairs": 1, 00:19:18.394 "pending_bdev_io": 0, 00:19:18.394 "completed_nvme_io": 20848, 00:19:18.394 "transports": [ 00:19:18.394 { 00:19:18.394 "trtype": "TCP" 00:19:18.394 } 00:19:18.394 ] 00:19:18.394 }, 00:19:18.394 { 00:19:18.394 "name": "nvmf_tgt_poll_group_002", 00:19:18.394 "admin_qpairs": 0, 00:19:18.394 "io_qpairs": 1, 00:19:18.394 "current_admin_qpairs": 0, 00:19:18.394 "current_io_qpairs": 1, 00:19:18.394 "pending_bdev_io": 0, 00:19:18.394 "completed_nvme_io": 17774, 00:19:18.394 "transports": [ 00:19:18.394 { 00:19:18.394 "trtype": "TCP" 00:19:18.394 } 00:19:18.394 ] 00:19:18.394 }, 00:19:18.394 { 00:19:18.394 "name": "nvmf_tgt_poll_group_003", 00:19:18.394 "admin_qpairs": 0, 00:19:18.394 "io_qpairs": 1, 00:19:18.394 "current_admin_qpairs": 0, 00:19:18.394 "current_io_qpairs": 1, 00:19:18.394 "pending_bdev_io": 0, 00:19:18.394 "completed_nvme_io": 21189, 00:19:18.394 "transports": [ 00:19:18.394 { 00:19:18.394 "trtype": "TCP" 00:19:18.394 } 00:19:18.394 ] 00:19:18.394 } 00:19:18.394 ] 00:19:18.394 }' 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:18.394 16:35:57 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:18.394 16:35:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1551733 00:19:26.495 Initializing NVMe Controllers 00:19:26.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:26.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:26.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:26.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:26.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:26.495 Initialization complete. Launching workers. 00:19:26.495 ======================================================== 00:19:26.495 Latency(us) 00:19:26.495 Device Information : IOPS MiB/s Average min max 00:19:26.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11068.30 43.24 5782.73 2472.24 8166.15 00:19:26.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10915.50 42.64 5863.93 1781.40 9611.41 00:19:26.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9280.40 36.25 6896.27 3302.41 11295.65 00:19:26.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11037.00 43.11 5798.18 4259.77 7344.80 00:19:26.495 ======================================================== 00:19:26.495 Total : 42301.20 165.24 6052.01 1781.40 11295.65 00:19:26.495 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:26.495 16:36:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.495 rmmod nvme_tcp 00:19:26.495 rmmod nvme_fabrics 00:19:26.495 rmmod nvme_keyring 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1551574 ']' 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1551574 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1551574 ']' 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1551574 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1551574 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1551574' 00:19:26.495 killing process with pid 1551574 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1551574 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1551574 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:26.495 16:36:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.495 16:36:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.401 16:36:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.401 16:36:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:28.401 16:36:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:29.379 16:36:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:31.279 16:36:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.547 16:36:15 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.547 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:36.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:19:36.548 00:19:36.548 --- 10.0.0.2 ping statistics --- 00:19:36.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.548 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:19:36.548 00:19:36.548 --- 10.0.0.1 ping statistics --- 00:19:36.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.548 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:36.548 net.core.busy_poll = 1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:36.548 net.core.busy_read = 1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:36.548 16:36:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1554992 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1554992 00:19:36.548 16:36:16 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1554992 ']' 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.548 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.548 [2024-07-15 16:36:16.060802] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:36.548 [2024-07-15 16:36:16.060917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.548 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.548 [2024-07-15 16:36:16.124046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.806 [2024-07-15 16:36:16.235208] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.806 [2024-07-15 16:36:16.235264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.806 [2024-07-15 16:36:16.235277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.806 [2024-07-15 16:36:16.235288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:36.806 [2024-07-15 16:36:16.235298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.806 [2024-07-15 16:36:16.235347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.806 [2024-07-15 16:36:16.235407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.806 [2024-07-15 16:36:16.235473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.806 [2024-07-15 16:36:16.235475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.806 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.063 [2024-07-15 16:36:16.464583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.063 Malloc1 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.063 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:37.064 [2024-07-15 16:36:16.515618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1555127 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:37.064 16:36:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:37.064 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:38.974 "tick_rate": 2700000000, 00:19:38.974 "poll_groups": [ 00:19:38.974 { 00:19:38.974 "name": "nvmf_tgt_poll_group_000", 00:19:38.974 "admin_qpairs": 1, 00:19:38.974 "io_qpairs": 3, 00:19:38.974 "current_admin_qpairs": 1, 00:19:38.974 "current_io_qpairs": 3, 00:19:38.974 "pending_bdev_io": 0, 00:19:38.974 "completed_nvme_io": 27882, 00:19:38.974 "transports": [ 00:19:38.974 { 00:19:38.974 "trtype": "TCP" 00:19:38.974 } 00:19:38.974 ] 00:19:38.974 }, 00:19:38.974 { 00:19:38.974 "name": "nvmf_tgt_poll_group_001", 00:19:38.974 "admin_qpairs": 0, 00:19:38.974 "io_qpairs": 1, 00:19:38.974 "current_admin_qpairs": 0, 00:19:38.974 "current_io_qpairs": 1, 00:19:38.974 "pending_bdev_io": 0, 00:19:38.974 "completed_nvme_io": 22345, 00:19:38.974 "transports": [ 00:19:38.974 { 00:19:38.974 "trtype": "TCP" 00:19:38.974 } 00:19:38.974 ] 00:19:38.974 }, 00:19:38.974 { 00:19:38.974 "name": "nvmf_tgt_poll_group_002", 00:19:38.974 "admin_qpairs": 0, 00:19:38.974 "io_qpairs": 0, 00:19:38.974 "current_admin_qpairs": 0, 00:19:38.974 "current_io_qpairs": 0, 00:19:38.974 "pending_bdev_io": 0, 00:19:38.974 "completed_nvme_io": 0, 00:19:38.974 "transports": [ 00:19:38.974 { 00:19:38.974 "trtype": "TCP" 00:19:38.974 } 00:19:38.974 ] 00:19:38.974 }, 00:19:38.974 { 00:19:38.974 "name": "nvmf_tgt_poll_group_003", 00:19:38.974 "admin_qpairs": 0, 00:19:38.974 "io_qpairs": 0, 00:19:38.974 "current_admin_qpairs": 0, 00:19:38.974 "current_io_qpairs": 0, 00:19:38.974 "pending_bdev_io": 0, 00:19:38.974 "completed_nvme_io": 0, 00:19:38.974 "transports": [ 00:19:38.974 { 00:19:38.974 "trtype": "TCP" 00:19:38.974 } 00:19:38.974 ] 00:19:38.974 } 00:19:38.974 ] 00:19:38.974 }' 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:19:38.974 16:36:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:39.231 16:36:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:39.231 16:36:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1555127 00:19:47.331 Initializing NVMe Controllers 00:19:47.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:47.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:47.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:47.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:47.331 Initialization complete. Launching workers. 00:19:47.331 ======================================================== 00:19:47.331 Latency(us) 00:19:47.331 Device Information : IOPS MiB/s Average min max 00:19:47.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11758.20 45.93 5442.96 1737.72 49145.59 00:19:47.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4396.80 17.18 14564.54 2575.35 60442.35 00:19:47.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5083.20 19.86 12598.75 1543.74 59047.83 00:19:47.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5171.90 20.20 12379.58 2303.97 58998.66 00:19:47.331 ======================================================== 00:19:47.331 Total : 26410.10 103.16 9697.23 1543.74 60442.35 00:19:47.331 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.331 rmmod nvme_tcp 00:19:47.331 rmmod nvme_fabrics 00:19:47.331 rmmod nvme_keyring 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1554992 ']' 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1554992 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1554992 ']' 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1554992 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1554992 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1554992' 00:19:47.331 killing process with pid 1554992 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1554992 00:19:47.331 16:36:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1554992 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.590 16:36:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.495 16:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:49.495 16:36:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:49.495 00:19:49.495 real 0m44.656s 00:19:49.495 user 2m35.895s 00:19:49.495 sys 0m12.070s 00:19:49.495 16:36:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:49.495 16:36:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.495 ************************************ 00:19:49.495 END TEST nvmf_perf_adq 00:19:49.495 ************************************ 00:19:49.495 16:36:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:49.495 16:36:29 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:49.495 16:36:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:49.495 16:36:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.495 16:36:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.755 ************************************ 00:19:49.755 START TEST nvmf_shutdown 00:19:49.755 ************************************ 
00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:49.755 * Looking for test storage... 00:19:49.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.755 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.756 16:36:29 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:49.756 ************************************ 00:19:49.756 START TEST nvmf_shutdown_tc1 00:19:49.756 ************************************ 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.756 16:36:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.756 16:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:51.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:51.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:51.661 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:51.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.662 16:36:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:51.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.662 
16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:51.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:19:51.662 00:19:51.662 --- 10.0.0.2 ping statistics --- 00:19:51.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.662 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:19:51.662 00:19:51.662 --- 10.0.0.1 ping statistics --- 00:19:51.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.662 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1558282 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1558282 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1558282 ']' 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.662 16:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.662 [2024-07-15 16:36:31.232965] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:19:51.662 [2024-07-15 16:36:31.233052] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.921 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.921 [2024-07-15 16:36:31.302791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.921 [2024-07-15 16:36:31.420270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.921 [2024-07-15 16:36:31.420336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.921 [2024-07-15 16:36:31.420360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.921 [2024-07-15 16:36:31.420380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.921 [2024-07-15 16:36:31.420397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:51.921 [2024-07-15 16:36:31.420497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.921 [2024-07-15 16:36:31.420611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.921 [2024-07-15 16:36:31.420680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:51.921 [2024-07-15 16:36:31.420687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.885 [2024-07-15 16:36:32.214906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:52.885 
16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.885 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.885 Malloc1 00:19:52.885 [2024-07-15 16:36:32.289938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.885 Malloc2 00:19:52.885 Malloc3 00:19:52.885 Malloc4 00:19:52.885 Malloc5 00:19:53.144 Malloc6 00:19:53.144 Malloc7 00:19:53.144 Malloc8 00:19:53.144 Malloc9 00:19:53.144 Malloc10 00:19:53.144 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.144 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:53.144 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.144 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1558473 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1558473 
/var/tmp/bdevperf.sock 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1558473 ']' 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.405 { 00:19:53.405 "params": { 00:19:53.405 "name": "Nvme$subsystem", 00:19:53.405 "trtype": "$TEST_TRANSPORT", 00:19:53.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.405 "adrfam": "ipv4", 00:19:53.405 "trsvcid": "$NVMF_PORT", 00:19:53.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.405 "hdgst": ${hdgst:-false}, 00:19:53.405 "ddgst": ${ddgst:-false} 00:19:53.405 }, 00:19:53.405 "method": "bdev_nvme_attach_controller" 00:19:53.405 } 00:19:53.405 EOF 00:19:53.405 )") 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.405 { 00:19:53.405 "params": { 00:19:53.405 "name": "Nvme$subsystem", 00:19:53.405 "trtype": "$TEST_TRANSPORT", 00:19:53.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.405 "adrfam": "ipv4", 00:19:53.405 "trsvcid": "$NVMF_PORT", 00:19:53.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.405 "hdgst": ${hdgst:-false}, 00:19:53.405 "ddgst": ${ddgst:-false} 00:19:53.405 }, 00:19:53.405 "method": "bdev_nvme_attach_controller" 00:19:53.405 } 00:19:53.405 EOF 00:19:53.405 
)") 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.405 { 00:19:53.405 "params": { 00:19:53.405 "name": "Nvme$subsystem", 00:19:53.405 "trtype": "$TEST_TRANSPORT", 00:19:53.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.405 "adrfam": "ipv4", 00:19:53.405 "trsvcid": "$NVMF_PORT", 00:19:53.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.405 "hdgst": ${hdgst:-false}, 00:19:53.405 "ddgst": ${ddgst:-false} 00:19:53.405 }, 00:19:53.405 "method": "bdev_nvme_attach_controller" 00:19:53.405 } 00:19:53.405 EOF 00:19:53.405 )") 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.405 { 00:19:53.405 "params": { 00:19:53.405 "name": "Nvme$subsystem", 00:19:53.405 "trtype": "$TEST_TRANSPORT", 00:19:53.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.405 "adrfam": "ipv4", 00:19:53.405 "trsvcid": "$NVMF_PORT", 00:19:53.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.405 "hdgst": ${hdgst:-false}, 00:19:53.405 "ddgst": ${ddgst:-false} 00:19:53.405 }, 00:19:53.405 "method": "bdev_nvme_attach_controller" 00:19:53.405 } 00:19:53.405 EOF 00:19:53.405 )") 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.405 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.405 16:36:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.405 { 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme$subsystem", 00:19:53.406 "trtype": "$TEST_TRANSPORT", 00:19:53.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "$NVMF_PORT", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.406 "hdgst": ${hdgst:-false}, 00:19:53.406 "ddgst": ${ddgst:-false} 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 } 00:19:53.406 EOF 00:19:53.406 )") 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.406 { 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme$subsystem", 00:19:53.406 "trtype": "$TEST_TRANSPORT", 00:19:53.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "$NVMF_PORT", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.406 "hdgst": ${hdgst:-false}, 00:19:53.406 "ddgst": ${ddgst:-false} 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 } 00:19:53.406 EOF 00:19:53.406 )") 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.406 { 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme$subsystem", 00:19:53.406 "trtype": "$TEST_TRANSPORT", 00:19:53.406 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "$NVMF_PORT", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.406 "hdgst": ${hdgst:-false}, 00:19:53.406 "ddgst": ${ddgst:-false} 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 } 00:19:53.406 EOF 00:19:53.406 )") 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.406 { 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme$subsystem", 00:19:53.406 "trtype": "$TEST_TRANSPORT", 00:19:53.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "$NVMF_PORT", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.406 "hdgst": ${hdgst:-false}, 00:19:53.406 "ddgst": ${ddgst:-false} 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 } 00:19:53.406 EOF 00:19:53.406 )") 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.406 { 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme$subsystem", 00:19:53.406 "trtype": "$TEST_TRANSPORT", 00:19:53.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "$NVMF_PORT", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.406 
"hdgst": ${hdgst:-false}, 00:19:53.406 "ddgst": ${ddgst:-false} 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 } 00:19:53.406 EOF 00:19:53.406 )") 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.406 { 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme$subsystem", 00:19:53.406 "trtype": "$TEST_TRANSPORT", 00:19:53.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "$NVMF_PORT", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.406 "hdgst": ${hdgst:-false}, 00:19:53.406 "ddgst": ${ddgst:-false} 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 } 00:19:53.406 EOF 00:19:53.406 )") 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:53.406 16:36:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme1", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme2", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme3", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme4", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme5", 00:19:53.406 
"trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme6", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme7", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme8", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme9", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": 
false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 },{ 00:19:53.406 "params": { 00:19:53.406 "name": "Nvme10", 00:19:53.406 "trtype": "tcp", 00:19:53.406 "traddr": "10.0.0.2", 00:19:53.406 "adrfam": "ipv4", 00:19:53.406 "trsvcid": "4420", 00:19:53.406 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:53.406 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:53.406 "hdgst": false, 00:19:53.406 "ddgst": false 00:19:53.406 }, 00:19:53.406 "method": "bdev_nvme_attach_controller" 00:19:53.406 }' 00:19:53.406 [2024-07-15 16:36:32.794441] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:53.407 [2024-07-15 16:36:32.794514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:53.407 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.407 [2024-07-15 16:36:32.857852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.407 [2024-07-15 16:36:32.967488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # 
kill -9 1558473 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:55.311 16:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:56.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1558473 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:56.246 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1558282 00:19:56.246 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:56.246 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:56.246 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:56.246 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:56.246 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 
"hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.247 { 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme$subsystem", 00:19:56.247 "trtype": "$TEST_TRANSPORT", 00:19:56.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "$NVMF_PORT", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.247 "hdgst": ${hdgst:-false}, 00:19:56.247 "ddgst": ${ddgst:-false} 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 } 00:19:56.247 EOF 00:19:56.247 )") 00:19:56.247 16:36:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:56.247 16:36:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme1", 00:19:56.247 "trtype": "tcp", 00:19:56.247 "traddr": "10.0.0.2", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "4420", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.247 "hdgst": false, 00:19:56.247 "ddgst": false 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 },{ 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme2", 00:19:56.247 "trtype": "tcp", 00:19:56.247 "traddr": "10.0.0.2", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "4420", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:56.247 "hdgst": false, 00:19:56.247 "ddgst": false 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 },{ 00:19:56.247 "params": { 00:19:56.247 "name": "Nvme3", 00:19:56.247 "trtype": "tcp", 00:19:56.247 "traddr": "10.0.0.2", 00:19:56.247 "adrfam": "ipv4", 00:19:56.247 "trsvcid": "4420", 00:19:56.247 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:56.247 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:56.247 "hdgst": false, 00:19:56.247 "ddgst": false 00:19:56.247 }, 00:19:56.247 "method": "bdev_nvme_attach_controller" 00:19:56.247 },{ 00:19:56.248 "params": { 00:19:56.248 "name": "Nvme4", 00:19:56.248 "trtype": "tcp", 00:19:56.248 "traddr": "10.0.0.2", 00:19:56.248 "adrfam": "ipv4", 00:19:56.248 "trsvcid": "4420", 00:19:56.248 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:56.248 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:56.248 "hdgst": false, 00:19:56.248 
"ddgst": false 00:19:56.248 }, 00:19:56.248 "method": "bdev_nvme_attach_controller" 00:19:56.248 },{ 00:19:56.248 "params": { 00:19:56.248 "name": "Nvme5", 00:19:56.248 "trtype": "tcp", 00:19:56.248 "traddr": "10.0.0.2", 00:19:56.248 "adrfam": "ipv4", 00:19:56.248 "trsvcid": "4420", 00:19:56.248 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:56.248 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:56.248 "hdgst": false, 00:19:56.248 "ddgst": false 00:19:56.248 }, 00:19:56.248 "method": "bdev_nvme_attach_controller" 00:19:56.248 },{ 00:19:56.248 "params": { 00:19:56.248 "name": "Nvme6", 00:19:56.248 "trtype": "tcp", 00:19:56.248 "traddr": "10.0.0.2", 00:19:56.248 "adrfam": "ipv4", 00:19:56.248 "trsvcid": "4420", 00:19:56.248 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:56.248 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:56.248 "hdgst": false, 00:19:56.248 "ddgst": false 00:19:56.248 }, 00:19:56.248 "method": "bdev_nvme_attach_controller" 00:19:56.248 },{ 00:19:56.248 "params": { 00:19:56.248 "name": "Nvme7", 00:19:56.248 "trtype": "tcp", 00:19:56.248 "traddr": "10.0.0.2", 00:19:56.248 "adrfam": "ipv4", 00:19:56.248 "trsvcid": "4420", 00:19:56.248 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:56.248 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:56.248 "hdgst": false, 00:19:56.248 "ddgst": false 00:19:56.248 }, 00:19:56.248 "method": "bdev_nvme_attach_controller" 00:19:56.248 },{ 00:19:56.248 "params": { 00:19:56.248 "name": "Nvme8", 00:19:56.248 "trtype": "tcp", 00:19:56.248 "traddr": "10.0.0.2", 00:19:56.248 "adrfam": "ipv4", 00:19:56.248 "trsvcid": "4420", 00:19:56.248 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:56.248 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:56.248 "hdgst": false, 00:19:56.248 "ddgst": false 00:19:56.248 }, 00:19:56.248 "method": "bdev_nvme_attach_controller" 00:19:56.248 },{ 00:19:56.248 "params": { 00:19:56.248 "name": "Nvme9", 00:19:56.248 "trtype": "tcp", 00:19:56.248 "traddr": "10.0.0.2", 00:19:56.248 "adrfam": "ipv4", 00:19:56.248 
"trsvcid": "4420", 00:19:56.248 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:56.248 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:56.248 "hdgst": false, 00:19:56.248 "ddgst": false 00:19:56.248 }, 00:19:56.248 "method": "bdev_nvme_attach_controller" 00:19:56.248 },{ 00:19:56.248 "params": { 00:19:56.248 "name": "Nvme10", 00:19:56.248 "trtype": "tcp", 00:19:56.248 "traddr": "10.0.0.2", 00:19:56.248 "adrfam": "ipv4", 00:19:56.248 "trsvcid": "4420", 00:19:56.248 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:56.248 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:56.248 "hdgst": false, 00:19:56.248 "ddgst": false 00:19:56.248 }, 00:19:56.248 "method": "bdev_nvme_attach_controller" 00:19:56.248 }' 00:19:56.507 [2024-07-15 16:36:35.850298] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:56.507 [2024-07-15 16:36:35.850375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558889 ] 00:19:56.507 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.507 [2024-07-15 16:36:35.914244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.507 [2024-07-15 16:36:36.024283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.882 Running I/O for 1 seconds... 
00:19:59.256 00:19:59.256 Latency(us) 00:19:59.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme1n1 : 1.16 220.89 13.81 0.00 0.00 287035.16 25049.32 264085.81 00:19:59.256 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme2n1 : 1.07 184.45 11.53 0.00 0.00 334747.02 7621.59 287387.50 00:19:59.256 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme3n1 : 1.16 275.34 17.21 0.00 0.00 220862.54 8641.04 256318.58 00:19:59.256 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme4n1 : 1.15 281.72 17.61 0.00 0.00 213877.95 5000.15 251658.24 00:19:59.256 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme5n1 : 1.17 219.40 13.71 0.00 0.00 271170.94 22427.88 284280.60 00:19:59.256 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme6n1 : 1.13 169.18 10.57 0.00 0.00 345279.65 40972.14 309135.74 00:19:59.256 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme7n1 : 1.14 224.77 14.05 0.00 0.00 255503.93 19903.53 270299.59 00:19:59.256 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme8n1 : 1.18 271.65 16.98 0.00 0.00 208579.55 16117.00 253211.69 00:19:59.256 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme9n1 : 1.17 221.32 13.83 0.00 0.00 251354.51 1614.13 260978.92 00:19:59.256 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.256 Verification LBA range: start 0x0 length 0x400 00:19:59.256 Nvme10n1 : 1.18 216.36 13.52 0.00 0.00 253515.09 18155.90 306028.85 00:19:59.256 =================================================================================================================== 00:19:59.256 Total : 2285.09 142.82 0.00 0.00 256927.79 1614.13 309135.74 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.514 rmmod nvme_tcp 00:19:59.514 rmmod nvme_fabrics 00:19:59.514 rmmod 
nvme_keyring 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1558282 ']' 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1558282 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1558282 ']' 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1558282 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.514 16:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558282 00:19:59.514 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.514 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.514 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558282' 00:19:59.514 killing process with pid 1558282 00:19:59.514 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1558282 00:19:59.514 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1558282 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.080 16:36:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.985 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.985 00:20:01.985 real 0m12.379s 00:20:01.985 user 0m36.948s 00:20:01.985 sys 0m3.124s 00:20:01.985 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.985 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:01.985 ************************************ 00:20:01.985 END TEST nvmf_shutdown_tc1 00:20:01.985 ************************************ 00:20:02.244 16:36:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:02.244 16:36:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:02.244 16:36:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:02.244 16:36:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.244 16:36:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:02.244 ************************************ 00:20:02.244 START TEST nvmf_shutdown_tc2 00:20:02.245 ************************************ 
00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.245 16:36:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:02.245 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:02.245 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.245 16:36:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:02.245 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:02.245 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:02.245 00:20:02.245 --- 10.0.0.2 ping statistics --- 00:20:02.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.245 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:02.245 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:02.245 00:20:02.245 --- 10.0.0.1 ping statistics --- 00:20:02.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.246 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1559655 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1559655 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1559655 ']' 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.246 16:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.504 [2024-07-15 16:36:41.865148] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:02.504 [2024-07-15 16:36:41.865274] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.504 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.504 [2024-07-15 16:36:41.929670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.504 [2024-07-15 16:36:42.041582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.504 [2024-07-15 16:36:42.041630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:02.504 [2024-07-15 16:36:42.041660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.504 [2024-07-15 16:36:42.041672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.504 [2024-07-15 16:36:42.041682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.504 [2024-07-15 16:36:42.041764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.504 [2024-07-15 16:36:42.041794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.504 [2024-07-15 16:36:42.041852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.504 [2024-07-15 16:36:42.041855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.762 [2024-07-15 16:36:42.187571] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.762 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.762 Malloc1 00:20:02.762 [2024-07-15 16:36:42.263280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.762 Malloc2 00:20:02.762 Malloc3 00:20:03.019 Malloc4 00:20:03.019 Malloc5 00:20:03.019 Malloc6 00:20:03.019 Malloc7 00:20:03.019 Malloc8 00:20:03.277 Malloc9 00:20:03.277 Malloc10 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1559834 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1559834 /var/tmp/bdevperf.sock 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1559834 ']' 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
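The trace above launches bdevperf and then blocks in `waitforlisten 1559834 /var/tmp/bdevperf.sock`. A minimal sketch of what that helper does, assuming a simple poll loop (SPDK's real helper also issues a probe RPC before declaring the socket ready; the retry cap and sleep interval here are assumptions):

```shell
# Hedged sketch, not SPDK's actual waitforlisten: poll until the given pid is
# still alive AND its RPC UNIX-domain socket file exists, up to max_retries.
waitforlisten() {
	local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
	echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
	while ((max_retries-- > 0)); do
		kill -0 "$pid" 2>/dev/null || return 1   # target process died
		[ -S "$sock" ] && return 0               # socket file is present
		sleep 0.1
	done
	return 1                                     # timed out
}
```

In the log, the call returns once bdevperf creates `/var/tmp/bdevperf.sock`, after which `rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init` can proceed.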
00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.277 { 00:20:03.277 "params": { 00:20:03.277 "name": "Nvme$subsystem", 00:20:03.277 "trtype": "$TEST_TRANSPORT", 00:20:03.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.277 "adrfam": "ipv4", 00:20:03.277 "trsvcid": "$NVMF_PORT", 00:20:03.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.277 "hdgst": ${hdgst:-false}, 00:20:03.277 "ddgst": ${ddgst:-false} 00:20:03.277 }, 00:20:03.277 "method": "bdev_nvme_attach_controller" 00:20:03.277 } 00:20:03.277 EOF 00:20:03.277 )") 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.277 { 00:20:03.277 "params": { 00:20:03.277 "name": "Nvme$subsystem", 00:20:03.277 "trtype": "$TEST_TRANSPORT", 00:20:03.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.277 "adrfam": "ipv4", 00:20:03.277 "trsvcid": "$NVMF_PORT", 00:20:03.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.277 "hdgst": ${hdgst:-false}, 00:20:03.277 "ddgst": ${ddgst:-false} 00:20:03.277 
}, 00:20:03.277 "method": "bdev_nvme_attach_controller" 00:20:03.277 } 00:20:03.277 EOF 00:20:03.277 )") 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.277 { 00:20:03.277 "params": { 00:20:03.277 "name": "Nvme$subsystem", 00:20:03.277 "trtype": "$TEST_TRANSPORT", 00:20:03.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.277 "adrfam": "ipv4", 00:20:03.277 "trsvcid": "$NVMF_PORT", 00:20:03.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.277 "hdgst": ${hdgst:-false}, 00:20:03.277 "ddgst": ${ddgst:-false} 00:20:03.277 }, 00:20:03.277 "method": "bdev_nvme_attach_controller" 00:20:03.277 } 00:20:03.277 EOF 00:20:03.277 )") 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.277 { 00:20:03.277 "params": { 00:20:03.277 "name": "Nvme$subsystem", 00:20:03.277 "trtype": "$TEST_TRANSPORT", 00:20:03.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.277 "adrfam": "ipv4", 00:20:03.277 "trsvcid": "$NVMF_PORT", 00:20:03.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.277 "hdgst": ${hdgst:-false}, 00:20:03.277 "ddgst": ${ddgst:-false} 00:20:03.277 }, 00:20:03.277 "method": "bdev_nvme_attach_controller" 00:20:03.277 } 00:20:03.277 EOF 00:20:03.277 )") 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.277 16:36:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.277 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.277 { 00:20:03.277 "params": { 00:20:03.277 "name": "Nvme$subsystem", 00:20:03.277 "trtype": "$TEST_TRANSPORT", 00:20:03.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.277 "adrfam": "ipv4", 00:20:03.277 "trsvcid": "$NVMF_PORT", 00:20:03.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.278 "hdgst": ${hdgst:-false}, 00:20:03.278 "ddgst": ${ddgst:-false} 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 } 00:20:03.278 EOF 00:20:03.278 )") 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.278 { 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme$subsystem", 00:20:03.278 "trtype": "$TEST_TRANSPORT", 00:20:03.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "$NVMF_PORT", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.278 "hdgst": ${hdgst:-false}, 00:20:03.278 "ddgst": ${ddgst:-false} 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 } 00:20:03.278 EOF 00:20:03.278 )") 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.278 { 00:20:03.278 
"params": { 00:20:03.278 "name": "Nvme$subsystem", 00:20:03.278 "trtype": "$TEST_TRANSPORT", 00:20:03.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "$NVMF_PORT", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.278 "hdgst": ${hdgst:-false}, 00:20:03.278 "ddgst": ${ddgst:-false} 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 } 00:20:03.278 EOF 00:20:03.278 )") 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.278 { 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme$subsystem", 00:20:03.278 "trtype": "$TEST_TRANSPORT", 00:20:03.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "$NVMF_PORT", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.278 "hdgst": ${hdgst:-false}, 00:20:03.278 "ddgst": ${ddgst:-false} 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 } 00:20:03.278 EOF 00:20:03.278 )") 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.278 { 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme$subsystem", 00:20:03.278 "trtype": "$TEST_TRANSPORT", 00:20:03.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "$NVMF_PORT", 00:20:03.278 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.278 "hdgst": ${hdgst:-false}, 00:20:03.278 "ddgst": ${ddgst:-false} 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 } 00:20:03.278 EOF 00:20:03.278 )") 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.278 { 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme$subsystem", 00:20:03.278 "trtype": "$TEST_TRANSPORT", 00:20:03.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "$NVMF_PORT", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.278 "hdgst": ${hdgst:-false}, 00:20:03.278 "ddgst": ${ddgst:-false} 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 } 00:20:03.278 EOF 00:20:03.278 )") 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
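The repeated `config+=("$(cat <<-EOF ... EOF)")` / `cat` pairs traced above are `gen_nvmf_target_json` building one JSON fragment per subsystem, then joining them with `IFS=,` for `jq`. A self-contained sketch of that pattern (the IP/port values are just the ones this run used; `hdgst`/`ddgst` default to `false` when unset):

```shell
# Hedged sketch of the per-subsystem JSON generation pattern from the trace:
# each loop pass appends one heredoc-expanded fragment to the config array.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in 1 2 3; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# join the fragments with commas, as the trace does before piping to jq
oldIFS=$IFS
IFS=,
joined="${config[*]}"
IFS=$oldIFS
```

The joined string is what appears expanded in the `printf '%s\n' '{ ... }'` line of the log, with `$subsystem` resolved to 1..10 per controller.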
00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:03.278 16:36:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme1", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme2", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme3", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme4", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme5", 00:20:03.278 
"trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme6", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme7", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme8", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.278 "params": { 00:20:03.278 "name": "Nvme9", 00:20:03.278 "trtype": "tcp", 00:20:03.278 "traddr": "10.0.0.2", 00:20:03.278 "adrfam": "ipv4", 00:20:03.278 "trsvcid": "4420", 00:20:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:03.278 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:03.278 "hdgst": false, 00:20:03.278 "ddgst": 
false 00:20:03.278 }, 00:20:03.278 "method": "bdev_nvme_attach_controller" 00:20:03.278 },{ 00:20:03.279 "params": { 00:20:03.279 "name": "Nvme10", 00:20:03.279 "trtype": "tcp", 00:20:03.279 "traddr": "10.0.0.2", 00:20:03.279 "adrfam": "ipv4", 00:20:03.279 "trsvcid": "4420", 00:20:03.279 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:03.279 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:03.279 "hdgst": false, 00:20:03.279 "ddgst": false 00:20:03.279 }, 00:20:03.279 "method": "bdev_nvme_attach_controller" 00:20:03.279 }' 00:20:03.279 [2024-07-15 16:36:42.754403] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:03.279 [2024-07-15 16:36:42.754480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559834 ] 00:20:03.279 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.279 [2024-07-15 16:36:42.816952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.538 [2024-07-15 16:36:42.927668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.441 Running I/O for 10 seconds... 
00:20:05.441 16:36:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.441 16:36:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:05.441 16:36:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:05.441 16:36:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.441 16:36:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:05.700 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:05.957 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:05.957 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.957 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:05.957 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.958 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.958 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:05.958 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.958 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:05.958 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:05.958 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:06.215 16:36:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1559834 00:20:06.215 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1559834 ']' 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1559834 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1559834 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
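The `read_io_count=3` → `67` → `131` progression above is `waitforio` polling until the bdev has completed at least 100 reads, with a countdown of 10 attempts. A sketch of that loop, with the RPC-plus-`jq` pipeline replaced by a stub that replays the samples this log observed (the stub is an assumption for illustration; the real count comes from `rpc_cmd ... bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`):

```shell
# Hedged sketch of shutdown.sh's waitforio: sample num_read_ops up to 10
# times, succeeding once the count reaches 100.
samples=(3 67 131)   # the three values seen in this run's trace
sample_idx=0
next_sample() {      # stand-in for the rpc_cmd | jq pipeline
	read_io_count=${samples[sample_idx]:-131}
	((sample_idx++)) || true
}
waitforio() {
	local ret=1 i
	for ((i = 10; i != 0; i--)); do
		next_sample
		if [ "$read_io_count" -ge 100 ]; then
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}
waitforio && io_ok=yes
```

With the log's samples, the loop sleeps twice and breaks on the third poll (131 >= 100), matching the `ret=0` / `break` / `return 0` lines above.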
common/autotest_common.sh@966 -- # echo 'killing process with pid 1559834' 00:20:06.216 killing process with pid 1559834 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1559834 00:20:06.216 16:36:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1559834 00:20:06.473 Received shutdown signal, test time was about 0.933429 seconds 00:20:06.473 00:20:06.473 Latency(us) 00:20:06.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.473 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme1n1 : 0.90 213.80 13.36 0.00 0.00 295364.08 22427.88 284280.60 00:20:06.473 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme2n1 : 0.87 220.79 13.80 0.00 0.00 280264.50 21165.70 268746.15 00:20:06.473 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme3n1 : 0.85 226.52 14.16 0.00 0.00 266736.58 24078.41 270299.59 00:20:06.473 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme4n1 : 0.89 293.91 18.37 0.00 0.00 201420.54 1128.68 236123.78 00:20:06.473 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme5n1 : 0.90 214.32 13.39 0.00 0.00 270332.46 28156.21 236123.78 00:20:06.473 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme6n1 : 0.86 228.78 14.30 0.00 0.00 245912.84 3932.16 240784.12 00:20:06.473 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme7n1 : 0.87 219.55 13.72 0.00 0.00 251794.77 39807.05 251658.24 00:20:06.473 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.473 Verification LBA range: start 0x0 length 0x400 00:20:06.473 Nvme8n1 : 0.93 271.30 16.96 0.00 0.00 190640.30 10679.94 251658.24 00:20:06.473 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.474 Verification LBA range: start 0x0 length 0x400 00:20:06.474 Nvme9n1 : 0.90 212.34 13.27 0.00 0.00 249769.53 23690.05 285834.05 00:20:06.474 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.474 Verification LBA range: start 0x0 length 0x400 00:20:06.474 Nvme10n1 : 0.88 218.80 13.68 0.00 0.00 235139.73 22719.15 268746.15 00:20:06.474 =================================================================================================================== 00:20:06.474 Total : 2320.10 145.01 0.00 0.00 245417.37 1128.68 285834.05 00:20:06.733 16:36:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1559655 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.676 rmmod nvme_tcp 00:20:07.676 rmmod nvme_fabrics 00:20:07.676 rmmod nvme_keyring 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1559655 ']' 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1559655 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1559655 ']' 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1559655 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1559655 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1559655' 00:20:07.676 killing process with pid 1559655 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1559655 00:20:07.676 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1559655 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.294 16:36:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.207 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.207 00:20:10.207 real 0m8.144s 00:20:10.207 user 0m25.236s 00:20:10.207 sys 0m1.552s 00:20:10.208 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.208 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.208 ************************************ 00:20:10.208 END TEST nvmf_shutdown_tc2 00:20:10.208 ************************************ 
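The `killprocess 1559655` trace above (uname check, `ps --no-headers -o comm=`, the `reactor_1 = sudo` guard, then `kill`/`wait`) follows a recognizable shape. A simplified sketch, with the assumption that hitting a bare `sudo` wrapper is treated as an error rather than resolved to its child as the real helper does:

```shell
# Hedged sketch of autotest_common.sh's killprocess: validate the pid, avoid
# SIGKILLing a "sudo" wrapper directly, then kill and reap the process.
killprocess() {
	local pid=$1
	[ -n "$pid" ] || return 1
	local process_name
	process_name=$(ps --no-headers -o comm= "$pid") || return 0  # already gone
	if [ "$process_name" = "sudo" ]; then
		return 1  # simplified; the real helper targets sudo's child instead
	fi
	echo "killing process with pid $pid"
	kill -9 "$pid" 2>/dev/null
	wait "$pid" 2>/dev/null || true
}

# demo against a throwaway background process
sleep 30 &
bgpid=$!
killprocess "$bgpid"
```

In the log the guard sees `reactor_0`/`reactor_1` (the SPDK reactor thread name), so the kill proceeds and the subsequent `wait` reaps the target before cleanup continues.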
00:20:10.208 16:36:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:10.208 16:36:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:10.208 16:36:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:10.208 16:36:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.208 16:36:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:10.466 ************************************ 00:20:10.466 START TEST nvmf_shutdown_tc3 00:20:10.466 ************************************ 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- 
# [[ phy != virt ]] 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:10.466 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.467 16:36:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:10.467 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:10.467 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.467 16:36:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:10.467 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.467 16:36:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:10.467 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.467 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:10.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:10.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:20:10.468 00:20:10.468 --- 10.0.0.2 ping statistics --- 00:20:10.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.468 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:20:10.468 00:20:10.468 --- 10.0.0.1 ping statistics --- 00:20:10.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.468 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.468 16:36:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1560755 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1560755 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1560755 ']' 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.468 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.468 [2024-07-15 16:36:50.057308] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:20:10.468 [2024-07-15 16:36:50.057392] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.727 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.727 [2024-07-15 16:36:50.127513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.727 [2024-07-15 16:36:50.247258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.727 [2024-07-15 16:36:50.247321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.727 [2024-07-15 16:36:50.247345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.727 [2024-07-15 16:36:50.247376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.727 [2024-07-15 16:36:50.247395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:10.728 [2024-07-15 16:36:50.247498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.728 [2024-07-15 16:36:50.247594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.728 [2024-07-15 16:36:50.247660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:10.728 [2024-07-15 16:36:50.247668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.986 [2024-07-15 16:36:50.405695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:10.986 
16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.986 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.986 Malloc1 00:20:10.986 [2024-07-15 16:36:50.494947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.986 Malloc2 00:20:10.986 Malloc3 00:20:11.244 Malloc4 00:20:11.244 Malloc5 00:20:11.244 Malloc6 00:20:11.244 Malloc7 00:20:11.244 Malloc8 00:20:11.513 Malloc9 00:20:11.513 Malloc10 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1560936 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
1560936 /var/tmp/bdevperf.sock 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1560936 ']' 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 
}, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 
"params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.513 { 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme$subsystem", 00:20:11.513 "trtype": "$TEST_TRANSPORT", 00:20:11.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "$NVMF_PORT", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.513 "hdgst": ${hdgst:-false}, 00:20:11.513 "ddgst": ${ddgst:-false} 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 } 00:20:11.513 EOF 00:20:11.513 )") 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:11.513 16:36:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme1", 00:20:11.513 "trtype": "tcp", 00:20:11.513 "traddr": "10.0.0.2", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "4420", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.513 "hdgst": false, 00:20:11.513 "ddgst": false 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 },{ 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme2", 00:20:11.513 "trtype": "tcp", 00:20:11.513 "traddr": "10.0.0.2", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "4420", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.513 "hdgst": false, 00:20:11.513 "ddgst": false 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 },{ 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme3", 00:20:11.513 "trtype": "tcp", 00:20:11.513 "traddr": "10.0.0.2", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "4420", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:11.513 "hdgst": false, 00:20:11.513 "ddgst": false 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 },{ 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme4", 00:20:11.513 "trtype": "tcp", 00:20:11.513 "traddr": "10.0.0.2", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "4420", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:11.513 "hdgst": false, 00:20:11.513 "ddgst": false 00:20:11.513 }, 00:20:11.513 "method": "bdev_nvme_attach_controller" 00:20:11.513 },{ 00:20:11.513 "params": { 00:20:11.513 "name": "Nvme5", 00:20:11.513 
"trtype": "tcp", 00:20:11.513 "traddr": "10.0.0.2", 00:20:11.513 "adrfam": "ipv4", 00:20:11.513 "trsvcid": "4420", 00:20:11.513 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:11.513 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:11.513 "hdgst": false, 00:20:11.514 "ddgst": false 00:20:11.514 }, 00:20:11.514 "method": "bdev_nvme_attach_controller" 00:20:11.514 },{ 00:20:11.514 "params": { 00:20:11.514 "name": "Nvme6", 00:20:11.514 "trtype": "tcp", 00:20:11.514 "traddr": "10.0.0.2", 00:20:11.514 "adrfam": "ipv4", 00:20:11.514 "trsvcid": "4420", 00:20:11.514 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:11.514 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:11.514 "hdgst": false, 00:20:11.514 "ddgst": false 00:20:11.514 }, 00:20:11.514 "method": "bdev_nvme_attach_controller" 00:20:11.514 },{ 00:20:11.514 "params": { 00:20:11.514 "name": "Nvme7", 00:20:11.514 "trtype": "tcp", 00:20:11.514 "traddr": "10.0.0.2", 00:20:11.514 "adrfam": "ipv4", 00:20:11.514 "trsvcid": "4420", 00:20:11.514 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:11.514 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:11.514 "hdgst": false, 00:20:11.514 "ddgst": false 00:20:11.514 }, 00:20:11.514 "method": "bdev_nvme_attach_controller" 00:20:11.514 },{ 00:20:11.514 "params": { 00:20:11.514 "name": "Nvme8", 00:20:11.514 "trtype": "tcp", 00:20:11.514 "traddr": "10.0.0.2", 00:20:11.514 "adrfam": "ipv4", 00:20:11.514 "trsvcid": "4420", 00:20:11.514 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:11.514 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:11.514 "hdgst": false, 00:20:11.514 "ddgst": false 00:20:11.514 }, 00:20:11.514 "method": "bdev_nvme_attach_controller" 00:20:11.514 },{ 00:20:11.514 "params": { 00:20:11.514 "name": "Nvme9", 00:20:11.514 "trtype": "tcp", 00:20:11.514 "traddr": "10.0.0.2", 00:20:11.514 "adrfam": "ipv4", 00:20:11.514 "trsvcid": "4420", 00:20:11.514 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:11.514 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:11.514 "hdgst": false, 00:20:11.514 "ddgst": 
false 00:20:11.514 }, 00:20:11.514 "method": "bdev_nvme_attach_controller" 00:20:11.514 },{ 00:20:11.514 "params": { 00:20:11.514 "name": "Nvme10", 00:20:11.514 "trtype": "tcp", 00:20:11.514 "traddr": "10.0.0.2", 00:20:11.514 "adrfam": "ipv4", 00:20:11.514 "trsvcid": "4420", 00:20:11.514 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:11.514 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:11.514 "hdgst": false, 00:20:11.514 "ddgst": false 00:20:11.514 }, 00:20:11.514 "method": "bdev_nvme_attach_controller" 00:20:11.514 }' 00:20:11.514 [2024-07-15 16:36:50.989964] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:11.514 [2024-07-15 16:36:50.990045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560936 ] 00:20:11.514 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.514 [2024-07-15 16:36:51.054756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.777 [2024-07-15 16:36:51.164240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.151 Running I/O for 10 seconds... 
00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:13.409 
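The `trap` installed at shutdown.sh@130 guarantees cleanup (`process_shm`, `kill -9 $perfpid`, `nvmftestfini`) runs whether the test exits normally or is interrupted. A minimal demonstration of that EXIT-trap pattern, run in a child shell so the trap fires immediately (the `cleanup`/`perfpid` names and the pid value 12345 are illustrative only):

```shell
#!/usr/bin/env bash
# EXIT traps run after the script body, so the cleanup line is printed
# even though nothing calls cleanup explicitly.
out=$(
    bash -c '
        cleanup() { echo "cleanup: kill -9 $perfpid || true"; }
        trap cleanup EXIT
        perfpid=12345   # placeholder for the bdevperf pid
        echo "running I/O"
    '
)
printf '%s\n' "$out"
```

The same trap also covers SIGINT/SIGTERM in the real script, which is why an aborted run still tears down the nvmf target.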
16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:13.409 16:36:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1560755 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1560755 ']' 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1560755 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1560755 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1560755' 00:20:13.680 killing process with pid 1560755 
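The `waitforio` trace above (shutdown.sh@57-@69) is a bounded polling loop: up to 10 iterations, succeeding once `num_read_ops` reaches 100 (here it read 131 on the first query). A runnable sketch with the RPC replaced by a deterministic stub, since the real call needs a live bdevperf socket:

```shell
#!/usr/bin/env bash
# poll_read_ops stands in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
# It sets read_io_count directly (no subshell) so the counter persists.
reads=0
poll_read_ops() { reads=$((reads + 70)); read_io_count=$reads; }

ret=1
for ((i = 10; i != 0; i--)); do
    poll_read_ops
    if [ "$read_io_count" -ge 100 ]; then
        ret=0   # enough I/O observed; shutdown.sh@69 then returns 0
        break
    fi
done
```

With this stub the loop sees 70, then 140, so it breaks on the second poll with `ret=0`, mirroring the `131 -ge 100` comparison in the trace.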
00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1560755 00:20:13.680 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1560755 00:20:13.680 [2024-07-15 16:36:53.042526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d01a0 is same with the state(5) to be set 00:20:13.680 [2024-07-15 16:36:53.044841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab3940 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set
is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 
00:20:13.681 [2024-07-15 16:36:53.048986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.048998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.681 [2024-07-15 16:36:53.049109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049133] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.049217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0ae0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 
is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.050998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 
00:20:13.682 [2024-07-15 16:36:53.051101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051362] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.051788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0fa0 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.053432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.682 [2024-07-15 16:36:53.053474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.682 [2024-07-15 16:36:53.053491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.682 [2024-07-15 16:36:53.053506] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.682 [2024-07-15 16:36:53.053519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.682 [2024-07-15 16:36:53.053533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.682 [2024-07-15 16:36:53.053546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.682 [2024-07-15 16:36:53.053559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.682 [2024-07-15 16:36:53.053572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631600 is same with the state(5) to be set 00:20:13.682 [2024-07-15 16:36:53.053637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.682 [2024-07-15 16:36:53.053658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.682 [2024-07-15 16:36:53.053673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.682 [2024-07-15 16:36:53.053686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.682 [2024-07-15 16:36:53.053700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.682 [2024-07-15 16:36:53.053719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.682 
[2024-07-15 16:36:53.053733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.053746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.053759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7c60 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.053803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.053824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.053839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.053852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.053866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.053887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.053903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.053916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.053936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1761240 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.053979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.053999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1595830 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 
16:36:53.054173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c1450 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.683 [2024-07-15 16:36:53.054416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8280 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:1[2024-07-15 16:36:53.054964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.683 he state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.054982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.054993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.683 [2024-07-15 16:36:53.055018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.055032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.683 [2024-07-15 16:36:53.055045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:36:53.055059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with tdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 he state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.683 [2024-07-15 16:36:53.055086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.055099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:1[2024-07-15 16:36:53.055112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.683 he state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with t[2024-07-15 16:36:53.055127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:20:13.683 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.683 [2024-07-15 16:36:53.055144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.683 [2024-07-15 16:36:53.055157] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.683 [2024-07-15 16:36:53.055162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684
[2024-07-15 16:36:53.055253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055508]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d18e0 is same with the state(5) to be set 00:20:13.684 [2024-07-15 16:36:53.055578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.684 [2024-07-15 16:36:53.055714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.684 [2024-07-15 16:36:53.055731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.055973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.055989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056325] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056483] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 
16:36:53.056825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1d80 is same with the state(5) to be set 00:20:13.685 [2024-07-15 16:36:53.056889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.685 [2024-07-15 16:36:53.056901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1d80 is same with the state(5) to be set 00:20:13.685 [2024-07-15 16:36:53.056905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.685 [2024-07-15 16:36:53.056928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1d80 is same with the state(5) to be set 00:20:13.685 [2024-07-15 16:36:53.056930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.056948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.056965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45
nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.056979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:13.686 [2024-07-15 16:36:53.057094] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16e42f0 was disconnected and freed. reset controller. 00:20:13.686 [2024-07-15 16:36:53.057497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 
[2024-07-15 16:36:53.057642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.057957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.057983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.057986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.057999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058070]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686
[2024-07-15 16:36:53.058169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.686 [2024-07-15 16:36:53.058312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.686 [2024-07-15 16:36:53.058341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.686 [2024-07-15 16:36:53.058346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058458] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2240 is same with the state(5) to be set 00:20:13.687 [2024-07-15 16:36:53.058838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:20:13.687 [2024-07-15 16:36:53.058946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.058976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.058995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.059011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.059026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.059040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.059056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.059069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.059085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.687 [2024-07-15 16:36:53.059099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.687 [2024-07-15 16:36:53.059114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.059595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.059623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:13.688 [2024-07-15 16:36:53.059653]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059710] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ed9f0 was disconnected and freed. reset controller. 00:20:13.688 [2024-07-15 16:36:53.059715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.059994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 
is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.060227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.688 [2024-07-15 16:36:53.060227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.688 [2024-07-15 16:36:53.060256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.688 [2024-07-15 16:36:53.060274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with t[2024-07-15 16:36:53.060353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:12he state(5) to be set 00:20:13.689 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with t[2024-07-15 16:36:53.060370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:20:13.689 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:12[2024-07-15 16:36:53.060395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 he state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:36:53.060410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 he state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060428] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with t[2024-07-15 16:36:53.060456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:20:13.689 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060533] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with t[2024-07-15 16:36:53.060533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:12he state(5) to be set 00:20:13.689 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with t[2024-07-15 16:36:53.060548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:20:13.689 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.060587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 16:36:53.060611] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 he state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.060635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d26e0 is same with the state(5) to be set 00:20:13.689 [2024-07-15 16:36:53.060639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.077958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.077974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.689 [2024-07-15 16:36:53.077987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.078003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.078016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.078031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.689 [2024-07-15 16:36:53.078045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.689 [2024-07-15 16:36:53.078061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 
[2024-07-15 16:36:53.078675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.078971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.078987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.079000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.079032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.079065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.690 [2024-07-15 16:36:53.079096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:13.690 [2024-07-15 16:36:53.079271] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17f0390 was disconnected and freed. reset controller. 
00:20:13.690 [2024-07-15 16:36:53.079690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.690 [2024-07-15 16:36:53.079715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.690 [2024-07-15 16:36:53.079746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.690 [2024-07-15 16:36:53.079773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.690 [2024-07-15 16:36:53.079800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175f350 is same with the state(5) to be set 00:20:13.690 [2024-07-15 16:36:53.079845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1631600 (9): Bad file descriptor 00:20:13.690 [2024-07-15 16:36:53.079901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.690 [2024-07-15 16:36:53.079923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.690 [2024-07-15 16:36:53.079937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.691 [2024-07-15 16:36:53.079951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.691 [2024-07-15 16:36:53.079965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.691 [2024-07-15 16:36:53.079978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.691 [2024-07-15 16:36:53.079992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.691 [2024-07-15 16:36:53.080005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.691 [2024-07-15 16:36:53.080018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1097610 is same with the state(5) to be set 00:20:13.691 [2024-07-15 16:36:53.080049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7c60 (9): Bad file descriptor 00:20:13.691 [2024-07-15 16:36:53.080080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1761240 (9): Bad file descriptor 00:20:13.691 [2024-07-15 16:36:53.080111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1595830 (9): Bad file descriptor 00:20:13.691 [2024-07-15 16:36:53.080140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c1450 (9): Bad file descriptor 00:20:13.691 [2024-07-15 16:36:53.080178] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b8280 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.080223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1659bb0 is same with the state(5) to be set
00:20:13.691 [2024-07-15 16:36:53.080375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:13.691 [2024-07-15 16:36:53.080476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.080489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1659990 is same with the state(5) to be set
00:20:13.691 [2024-07-15 16:36:53.084386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:13.691 [2024-07-15 16:36:53.084432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:13.691 [2024-07-15 16:36:53.084969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:13.691 [2024-07-15 16:36:53.085006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1659bb0 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.085207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:13.691 [2024-07-15 16:36:53.085236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1761240 with addr=10.0.0.2, port=4420
00:20:13.691 [2024-07-15 16:36:53.085258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1761240 is same with the state(5) to be set
00:20:13.691 [2024-07-15 16:36:53.085392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:13.691 [2024-07-15 16:36:53.085418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b8280 with addr=10.0.0.2, port=4420
00:20:13.691 [2024-07-15 16:36:53.085433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8280 is same with the state(5) to be set
00:20:13.691 [2024-07-15 16:36:53.086648] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:13.691 [2024-07-15 16:36:53.086699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1761240 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.086725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b8280 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.086786] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:13.691 [2024-07-15 16:36:53.086867] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:13.691 [2024-07-15 16:36:53.086945] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:13.691 [2024-07-15 16:36:53.087025] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:13.691 [2024-07-15 16:36:53.087106] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:13.691 [2024-07-15 16:36:53.087172] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:13.691 [2024-07-15 16:36:53.087334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:13.691 [2024-07-15 16:36:53.087361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1659bb0 with addr=10.0.0.2, port=4420
00:20:13.691 [2024-07-15 16:36:53.087378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1659bb0 is same with the state(5) to be set
00:20:13.691 [2024-07-15 16:36:53.087394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:13.691 [2024-07-15 16:36:53.087407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:13.691 [2024-07-15 16:36:53.087422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:13.691 [2024-07-15 16:36:53.087443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:13.691 [2024-07-15 16:36:53.087458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:13.691 [2024-07-15 16:36:53.087472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:13.691 [2024-07-15 16:36:53.087602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:13.691 [2024-07-15 16:36:53.087625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:13.691 [2024-07-15 16:36:53.087642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1659bb0 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.087696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:13.691 [2024-07-15 16:36:53.087713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:13.691 [2024-07-15 16:36:53.087727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:13.691 [2024-07-15 16:36:53.087785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:13.691 [2024-07-15 16:36:53.089615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175f350 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.089663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097610 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.089717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1659990 (9): Bad file descriptor
00:20:13.691 [2024-07-15 16:36:53.089862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.089894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.089922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.089938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.089955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.089970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.089985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.691 [2024-07-15 16:36:53.090260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.691 [2024-07-15 16:36:53.090279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.090983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.090997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.692 [2024-07-15 16:36:53.091406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.692 [2024-07-15 16:36:53.091419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.091845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.091859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629d70 is same with the state(5) to be set
00:20:13.693 [2024-07-15 16:36:53.093132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15 16:36:53.093984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.693 [2024-07-15 16:36:53.093998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.693 [2024-07-15
16:36:53.094014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:13.694 [2024-07-15 16:36:53.094525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094691] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.094971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.094988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.095006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.095023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.095037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.095053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.095068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.095084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.095098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.095113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5780 is same with the state(5) to be set 00:20:13.694 [2024-07-15 16:36:53.096353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.096376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.096397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.096413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.096430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.096445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.096462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.096476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.096493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.096507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.096523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.694 [2024-07-15 16:36:53.096536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.694 [2024-07-15 16:36:53.096552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.695 [2024-07-15 16:36:53.096814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.096981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.096997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097683] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.695 [2024-07-15 16:36:53.097867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.695 [2024-07-15 16:36:53.097889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.097905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.097921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.097935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.097956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.097971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.097986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 
16:36:53.098045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.098329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.098344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1590b50 is same with the state(5) to be set 00:20:13.696 [2024-07-15 16:36:53.099628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.099652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.099691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.099723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.099753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.099782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.099823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 
[2024-07-15 16:36:53.099874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.099958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.099983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:13.696 [2024-07-15 16:36:53.100461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.696 [2024-07-15 16:36:53.100477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.100959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.100983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 
16:36:53.101309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.101954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.101980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.102004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.102030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.102053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.102079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.102102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:13.697 [2024-07-15 16:36:53.102128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.697 [2024-07-15 16:36:53.102151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.697 [2024-07-15 16:36:53.102176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fc0 is same with the state(5) to be set 00:20:13.697 [2024-07-15 16:36:53.104023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.697 [2024-07-15 16:36:53.104059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:13.698 [2024-07-15 16:36:53.104079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:13.698 [2024-07-15 16:36:53.104104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:13.698 [2024-07-15 16:36:53.104615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.698 [2024-07-15 16:36:53.104648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1595830 with addr=10.0.0.2, port=4420 00:20:13.698 [2024-07-15 16:36:53.104666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1595830 is same with the state(5) to be set 00:20:13.698 [2024-07-15 16:36:53.104805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.698 [2024-07-15 16:36:53.104830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c1450 with addr=10.0.0.2, port=4420 00:20:13.698 [2024-07-15 16:36:53.104845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c1450 is same with the state(5) to be set 
00:20:13.698 [2024-07-15 16:36:53.104994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.698 [2024-07-15 16:36:53.105020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7c60 with addr=10.0.0.2, port=4420 00:20:13.698 [2024-07-15 16:36:53.105036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7c60 is same with the state(5) to be set 00:20:13.698 [2024-07-15 16:36:53.105176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.698 [2024-07-15 16:36:53.105200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1631600 with addr=10.0.0.2, port=4420 00:20:13.698 [2024-07-15 16:36:53.105215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631600 is same with the state(5) to be set 00:20:13.698 [2024-07-15 16:36:53.106068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106182] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106702] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106872] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.106981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.106997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.107011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.107026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.107040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.107056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.107070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.107086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.107100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.107116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.107130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.107146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.107161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.698 [2024-07-15 16:36:53.107177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.698 [2024-07-15 16:36:53.107191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 
16:36:53.107242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:13.699 [2024-07-15 16:36:53.107756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107933] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.107983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.107998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.108013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.108027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.108043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.108057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.108072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eeec0 is same with the state(5) to be set 00:20:13.699 [2024-07-15 16:36:53.109343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:13.699 [2024-07-15 16:36:53.109559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.699 [2024-07-15 16:36:53.109719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.699 [2024-07-15 16:36:53.109735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.109976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.109992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.700 [2024-07-15 16:36:53.110249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.700 [2024-07-15 16:36:53.110265] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.700 [2024-07-15 16:36:53.110279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / "ABORTED - SQ DELETION (00/08)" record pair repeats for cid:30 through cid:63 (lba:20224 through lba:24448, len:128 each) ...]
00:20:13.701 [2024-07-15 16:36:53.111334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1860 is same with the state(5) to be set
00:20:13.701 [2024-07-15 16:36:53.112575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.701 [2024-07-15 16:36:53.112598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / "ABORTED - SQ DELETION (00/08)" record pair repeats for cid:1 through cid:63 (lba:8320 through lba:16256, len:128 each) ...]
00:20:13.702 [2024-07-15 16:36:53.114572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2b10 is same with the state(5) to be set
00:20:13.702 [2024-07-15 16:36:53.117158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:13.702 [2024-07-15 16:36:53.117202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:13.702 [2024-07-15 16:36:53.117227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:13.703 [2024-07-15 16:36:53.117255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:13.703 [2024-07-15 16:36:53.117280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect:
*NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:13.703 task offset: 22272 on job bdev=Nvme2n1 fails
00:20:13.703
00:20:13.703 Latency(us)
00:20:13.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:13.703 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme1n1 ended in about 0.74 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme1n1 : 0.74 173.68 10.85 86.84 0.00 242360.57 19515.16 248551.35
00:20:13.703 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme2n1 ended in about 0.73 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme2n1 : 0.73 176.37 11.02 88.19 0.00 232448.51 25826.04 233016.89
00:20:13.703 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme3n1 ended in about 0.74 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme3n1 : 0.74 178.33 11.15 86.46 0.00 226434.06 17864.63 229910.00
00:20:13.703 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme4n1 ended in about 0.74 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme4n1 : 0.74 86.09 5.38 86.09 0.00 339448.41 35535.08 265639.25
00:20:13.703 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme5n1 ended in about 0.73 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme5n1 : 0.73 176.06 11.00 88.03 0.00 214673.13 21942.42 236123.78
00:20:13.703 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme6n1 ended in about 0.75 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme6n1 : 0.75 169.95 10.62 84.98 0.00 217459.42 18447.17 223696.21
00:20:13.703 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme7n1 ended in about 0.73 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme7n1 : 0.73 175.78 10.99 87.89 0.00 203230.63 21748.24 257872.02
00:20:13.703 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme8n1 ended in about 0.76 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme8n1 : 0.76 169.23 10.58 84.61 0.00 206766.59 23010.42 273406.48
00:20:13.703 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme9n1 ended in about 0.76 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme9n1 : 0.76 84.25 5.27 84.25 0.00 302883.27 21359.88 292047.83
00:20:13.703 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:13.703 Job: Nvme10n1 ended in about 0.75 seconds with error
00:20:13.703 Verification LBA range: start 0x0 length 0x400
00:20:13.703 Nvme10n1 : 0.75 85.63 5.35 85.63 0.00 287941.97 21068.61 267192.70
00:20:13.703 ===================================================================================================================
00:20:13.703 Total : 1475.37 92.21 862.97 0.00 240362.80 17864.63 292047.83
00:20:13.703 [2024-07-15 16:36:53.146615] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:13.703 [2024-07-15 16:36:53.146795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1595830 (9): Bad file descriptor
00:20:13.703 [2024-07-15 16:36:53.146828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c1450 (9): Bad file descriptor
00:20:13.703 [2024-07-15 16:36:53.146850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7c60 (9): Bad file descriptor
00:20:13.703 [2024-07-15 16:36:53.146869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1631600 (9): Bad file descriptor
00:20:13.703 [2024-07-15 16:36:53.146941] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:13.703 [2024-07-15 16:36:53.146974] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:13.703 [2024-07-15 16:36:53.146997] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:13.703 [2024-07-15 16:36:53.147016] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:13.703 [2024-07-15 16:36:53.147036] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:13.703 [2024-07-15 16:36:53.147168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:13.703 [2024-07-15 16:36:53.147493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:13.703 [2024-07-15 16:36:53.147537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b8280 with addr=10.0.0.2, port=4420
00:20:13.703 [2024-07-15 16:36:53.147556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b8280 is same with the state(5) to be set
00:20:13.703 [2024-07-15 16:36:53.147738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:13.703 [2024-07-15 16:36:53.147766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1761240 with addr=10.0.0.2, port=4420
00:20:13.703 [2024-07-15 16:36:53.147782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1761240 is same with the state(5) to be set
00:20:13.703 [2024-07-15 16:36:53.147925]
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.703 [2024-07-15 16:36:53.147952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1659bb0 with addr=10.0.0.2, port=4420 00:20:13.703 [2024-07-15 16:36:53.147968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1659bb0 is same with the state(5) to be set 00:20:13.703 [2024-07-15 16:36:53.148094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.703 [2024-07-15 16:36:53.148131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1097610 with addr=10.0.0.2, port=4420 00:20:13.703 [2024-07-15 16:36:53.148147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1097610 is same with the state(5) to be set 00:20:13.703 [2024-07-15 16:36:53.148277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.703 [2024-07-15 16:36:53.148302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1659990 with addr=10.0.0.2, port=4420 00:20:13.703 [2024-07-15 16:36:53.148318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1659990 is same with the state(5) to be set 00:20:13.703 [2024-07-15 16:36:53.148334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.703 [2024-07-15 16:36:53.148347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.703 [2024-07-15 16:36:53.148362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:13.703 [2024-07-15 16:36:53.148384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:13.703 [2024-07-15 16:36:53.148400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:13.703 [2024-07-15 16:36:53.148413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:13.703 [2024-07-15 16:36:53.148431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:13.703 [2024-07-15 16:36:53.148445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:13.703 [2024-07-15 16:36:53.148459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:13.703 [2024-07-15 16:36:53.148476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:13.703 [2024-07-15 16:36:53.148490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:13.703 [2024-07-15 16:36:53.148504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:13.703 [2024-07-15 16:36:53.148539] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:13.703 [2024-07-15 16:36:53.148564] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:13.703 [2024-07-15 16:36:53.148584] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:13.703 [2024-07-15 16:36:53.148604] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:13.703 [2024-07-15 16:36:53.149492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.703 [2024-07-15 16:36:53.149518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.703 [2024-07-15 16:36:53.149536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.703 [2024-07-15 16:36:53.149557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.703 [2024-07-15 16:36:53.149682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.703 [2024-07-15 16:36:53.149708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175f350 with addr=10.0.0.2, port=4420 00:20:13.703 [2024-07-15 16:36:53.149724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175f350 is same with the state(5) to be set 00:20:13.703 [2024-07-15 16:36:53.149743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b8280 (9): Bad file descriptor 00:20:13.703 [2024-07-15 16:36:53.149763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1761240 (9): Bad file descriptor 00:20:13.703 [2024-07-15 16:36:53.149781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1659bb0 (9): Bad file descriptor 00:20:13.703 [2024-07-15 16:36:53.149799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097610 (9): Bad file descriptor 00:20:13.703 [2024-07-15 16:36:53.149817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1659990 (9): Bad file descriptor 00:20:13.703 [2024-07-15 16:36:53.150177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175f350 (9): Bad file descriptor 00:20:13.703 [2024-07-15 16:36:53.150206] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:13.703 [2024-07-15 16:36:53.150222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:13.703 [2024-07-15 16:36:53.150235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:13.703 [2024-07-15 16:36:53.150252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:13.703 [2024-07-15 16:36:53.150267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:13.703 [2024-07-15 16:36:53.150281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:13.703 [2024-07-15 16:36:53.150298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:13.703 [2024-07-15 16:36:53.150312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:13.703 [2024-07-15 16:36:53.150325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:13.704 [2024-07-15 16:36:53.150341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:13.704 [2024-07-15 16:36:53.150355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:13.704 [2024-07-15 16:36:53.150368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:20:13.704 [2024-07-15 16:36:53.150383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:13.704 [2024-07-15 16:36:53.150398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:13.704 [2024-07-15 16:36:53.150411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:13.704 [2024-07-15 16:36:53.150767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.704 [2024-07-15 16:36:53.150790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.704 [2024-07-15 16:36:53.150802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.704 [2024-07-15 16:36:53.150814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.704 [2024-07-15 16:36:53.150831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.704 [2024-07-15 16:36:53.150844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:13.704 [2024-07-15 16:36:53.150856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:13.704 [2024-07-15 16:36:53.150870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:13.704 [2024-07-15 16:36:53.150944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:14.288 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:14.288 16:36:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1560936 00:20:15.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1560936) - No such process 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.224 rmmod nvme_tcp 00:20:15.224 rmmod nvme_fabrics 00:20:15.224 rmmod nvme_keyring 00:20:15.224 16:36:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.224 16:36:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.230 16:36:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:17.230 00:20:17.230 real 0m6.911s 00:20:17.230 user 0m15.230s 00:20:17.230 sys 0m1.341s 00:20:17.230 16:36:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.230 16:36:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:17.230 ************************************ 00:20:17.230 END TEST nvmf_shutdown_tc3 00:20:17.230 ************************************ 00:20:17.230 16:36:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # 
return 0 00:20:17.230 16:36:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:17.230 00:20:17.230 real 0m27.652s 00:20:17.230 user 1m17.496s 00:20:17.230 sys 0m6.170s 00:20:17.230 16:36:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.230 16:36:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:17.230 ************************************ 00:20:17.230 END TEST nvmf_shutdown 00:20:17.230 ************************************ 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:17.230 16:36:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.230 16:36:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.230 16:36:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:17.230 16:36:56 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.230 16:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.489 ************************************ 00:20:17.489 START TEST nvmf_multicontroller 00:20:17.489 ************************************ 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:17.489 * Looking for test storage... 
00:20:17.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.489 
16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:17.489 16:36:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.489 16:36:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.389 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.389 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:19.389 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:19.389 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:19.389 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:19.389 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.390 16:36:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:19.390 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:19.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.390 16:36:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:19.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:19.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.390 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:19.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:20:19.649 00:20:19.649 --- 10.0.0.2 ping statistics --- 00:20:19.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.649 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:20:19.650 00:20:19.650 --- 10.0.0.1 ping statistics --- 00:20:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.650 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:19.650 16:36:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1563317 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1563317 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1563317 ']' 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.650 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 [2024-07-15 16:36:59.072585] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:20:19.650 [2024-07-15 16:36:59.072662] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.650 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.650 [2024-07-15 16:36:59.137794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.650 [2024-07-15 16:36:59.246040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.650 [2024-07-15 16:36:59.246117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.650 [2024-07-15 16:36:59.246131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.650 [2024-07-15 16:36:59.246143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.650 [2024-07-15 16:36:59.246153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.650 [2024-07-15 16:36:59.246262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.650 [2024-07-15 16:36:59.246331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.650 [2024-07-15 16:36:59.246335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 [2024-07-15 16:36:59.383727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 Malloc0 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.908 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.909 [2024-07-15 16:36:59.440729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.909 [2024-07-15 16:36:59.448621] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.909 Malloc1 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.909 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1563464 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1563464 /var/tmp/bdevperf.sock 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1563464 ']' 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.167 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.427 NVMe0n1 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.427 1 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:20.427 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.428 request: 00:20:20.428 { 00:20:20.428 "name": "NVMe0", 00:20:20.428 "trtype": "tcp", 00:20:20.428 "traddr": "10.0.0.2", 00:20:20.428 "adrfam": "ipv4", 00:20:20.428 "trsvcid": "4420", 00:20:20.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.428 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:20.428 "hostaddr": "10.0.0.2", 00:20:20.428 "hostsvcid": "60000", 00:20:20.428 "prchk_reftag": false, 00:20:20.428 "prchk_guard": false, 00:20:20.428 "hdgst": false, 00:20:20.428 "ddgst": false, 00:20:20.428 "method": "bdev_nvme_attach_controller", 00:20:20.428 "req_id": 1 00:20:20.428 } 00:20:20.428 Got JSON-RPC error response 00:20:20.428 response: 00:20:20.428 { 00:20:20.428 "code": -114, 00:20:20.428 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:20.428 } 00:20:20.428 
16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:20:20.428 request: 00:20:20.428 { 00:20:20.428 "name": "NVMe0", 00:20:20.428 "trtype": "tcp", 00:20:20.428 "traddr": "10.0.0.2", 00:20:20.428 "adrfam": "ipv4", 00:20:20.428 "trsvcid": "4420", 00:20:20.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.428 "hostaddr": "10.0.0.2", 00:20:20.428 "hostsvcid": "60000", 00:20:20.428 "prchk_reftag": false, 00:20:20.428 "prchk_guard": false, 00:20:20.428 "hdgst": false, 00:20:20.428 "ddgst": false, 00:20:20.428 "method": "bdev_nvme_attach_controller", 00:20:20.428 "req_id": 1 00:20:20.428 } 00:20:20.428 Got JSON-RPC error response 00:20:20.428 response: 00:20:20.428 { 00:20:20.428 "code": -114, 00:20:20.428 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:20.428 } 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.428 request: 00:20:20.428 { 00:20:20.428 "name": "NVMe0", 00:20:20.428 "trtype": "tcp", 00:20:20.428 "traddr": "10.0.0.2", 00:20:20.428 "adrfam": "ipv4", 00:20:20.428 "trsvcid": "4420", 00:20:20.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.428 "hostaddr": "10.0.0.2", 00:20:20.428 "hostsvcid": "60000", 00:20:20.428 "prchk_reftag": false, 00:20:20.428 "prchk_guard": false, 00:20:20.428 "hdgst": false, 00:20:20.428 "ddgst": false, 00:20:20.428 "multipath": "disable", 00:20:20.428 "method": "bdev_nvme_attach_controller", 00:20:20.428 "req_id": 1 00:20:20.428 } 00:20:20.428 Got JSON-RPC error response 00:20:20.428 response: 00:20:20.428 { 00:20:20.428 "code": -114, 00:20:20.428 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:20.428 } 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.428 request: 00:20:20.428 { 00:20:20.428 "name": "NVMe0", 00:20:20.428 "trtype": "tcp", 00:20:20.428 "traddr": "10.0.0.2", 00:20:20.428 "adrfam": "ipv4", 00:20:20.428 "trsvcid": "4420", 00:20:20.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.428 "hostaddr": "10.0.0.2", 00:20:20.428 
"hostsvcid": "60000", 00:20:20.428 "prchk_reftag": false, 00:20:20.428 "prchk_guard": false, 00:20:20.428 "hdgst": false, 00:20:20.428 "ddgst": false, 00:20:20.428 "multipath": "failover", 00:20:20.428 "method": "bdev_nvme_attach_controller", 00:20:20.428 "req_id": 1 00:20:20.428 } 00:20:20.428 Got JSON-RPC error response 00:20:20.428 response: 00:20:20.428 { 00:20:20.428 "code": -114, 00:20:20.428 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:20.428 } 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.428 16:36:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.686 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.686 16:37:00 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.686 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:20.686 16:37:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.063 0 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.064 
16:37:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1563464 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1563464 ']' 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1563464 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1563464 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1563464' 00:20:22.064 killing process with pid 1563464 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1563464 00:20:22.064 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1563464 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.323 16:37:01 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:22.323 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:22.323 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:22.323 [2024-07-15 16:36:59.548927] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:20:22.323 [2024-07-15 16:36:59.549030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563464 ] 00:20:22.324 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.324 [2024-07-15 16:36:59.608797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.324 [2024-07-15 16:36:59.715930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.324 [2024-07-15 16:37:00.248764] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name b52ddc76-73fe-480e-b1b9-92688eb5cbdd already exists 00:20:22.324 [2024-07-15 16:37:00.248804] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:b52ddc76-73fe-480e-b1b9-92688eb5cbdd alias for bdev NVMe1n1 00:20:22.324 [2024-07-15 16:37:00.248819] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:22.324 Running I/O for 1 seconds... 
00:20:22.324 00:20:22.324 Latency(us) 00:20:22.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.324 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:22.324 NVMe0n1 : 1.01 16824.52 65.72 0.00 0.00 7574.54 7136.14 17379.18 00:20:22.324 =================================================================================================================== 00:20:22.324 Total : 16824.52 65.72 0.00 0.00 7574.54 7136.14 17379.18 00:20:22.324 Received shutdown signal, test time was about 1.000000 seconds 00:20:22.324 00:20:22.324 Latency(us) 00:20:22.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.324 =================================================================================================================== 00:20:22.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.324 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.324 rmmod nvme_tcp 00:20:22.324 rmmod nvme_fabrics 00:20:22.324 rmmod nvme_keyring 
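The duplicate-attach step earlier in this trace (error -114, "A controller named NVMe0 already exists") is an *expected* failure, and the `es=1` / `(( es > 128 ))` / `(( !es == 0 ))` bookkeeping that follows it is how autotest_common.sh accepts it. A minimal sketch of that exit-status classification; the function name here is illustrative, not the real SPDK helper:

```shell
# Sketch of the expected-failure exit-status handling visible in the
# trace: es > 128 means death by signal (128+N), any other nonzero exit
# satisfies a NOT-style "this command must fail" check, and exit 0 is
# the one outcome the test rejects.
classify_exit() {
    local es=$1
    if (( es > 128 )); then
        echo "killed-by-signal"
    elif (( es != 0 )); then
        echo "expected-failure"
    else
        echo "unexpected-success"
    fi
}

classify_exit 1     # the duplicate bdev_nvme_attach_controller RPC: rc=1
classify_exit 137   # 128 + 9, i.e. SIGKILL
classify_exit 0
```

The trace's `[[ 1 == 0 ]]` followed by `es=1` and `(( !es == 0 ))` corresponds to the "expected-failure" branch: the RPC returned nonzero, so the test proceeds.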
00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1563317 ']' 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1563317 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1563317 ']' 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1563317 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1563317 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1563317' 00:20:22.324 killing process with pid 1563317 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1563317 00:20:22.324 16:37:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1563317 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.583 16:37:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.117 16:37:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.117 00:20:25.117 real 0m7.332s 00:20:25.117 user 0m11.342s 00:20:25.117 sys 0m2.235s 00:20:25.117 16:37:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.117 16:37:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:25.117 ************************************ 00:20:25.117 END TEST nvmf_multicontroller 00:20:25.117 ************************************ 00:20:25.117 16:37:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:25.117 16:37:04 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:25.117 16:37:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:25.117 16:37:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.117 16:37:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:25.117 ************************************ 00:20:25.117 START TEST nvmf_aer 00:20:25.117 ************************************ 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:25.117 * Looking for test storage... 
00:20:25.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.117 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.118 16:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:27.024 16:37:06 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:27.024 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:27.024 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:27.024 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:27.025 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:27.025 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
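Earlier in the common.sh prologue above, `nvme gen-hostnqn` produces `NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-...` and `NVME_HOSTID` carries the UUID suffix of that NQN. A hedged sketch of that derivation; the literal NQN is copied from the trace, while the suffix-strip expansion is an assumption about how common.sh extracts it (the nvme CLI itself may not be installed where this runs):

```shell
# Stand-in for `nvme gen-hostnqn` output, taken verbatim from the trace.
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

# Assumed extraction: drop everything through the last "uuid:" to leave
# the bare UUID that the trace stores in NVME_HOSTID.
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}

echo "$NVME_HOSTID"
```

Both values then feed the `--hostnqn`/`--hostid` arguments collected into the `NVME_HOST` array in the trace.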
00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:27.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:20:27.025 00:20:27.025 --- 10.0.0.2 ping statistics --- 00:20:27.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.025 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:27.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:27.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:20:27.025 00:20:27.025 --- 10.0.0.1 ping statistics --- 00:20:27.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.025 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
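The `nvmf_tcp_init` entries above amount to the following privileged sequence: move one e810 port netdev into a fresh network namespace as the target side, address both ends on 10.0.0.0/24, open TCP 4420, and verify reachability with pings. This is a config-fragment sketch reconstructed from the trace, not meant to run as-is — it needs root plus the actual `cvl_0_0`/`cvl_0_1` devices enumerated earlier in the log:

```shell
# Namespace plumbing from the trace (root required; interface names and
# addresses are taken from the log, cvl_0_0 = target port, cvl_0_1 = initiator).
ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```

After this, `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` so it listens on the 10.0.0.2 side, which is why the ping round-trips appear in the log before the app start.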
00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1565674 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1565674 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1565674 ']' 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.025 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.025 [2024-07-15 16:37:06.416644] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:27.025 [2024-07-15 16:37:06.416716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.025 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.025 [2024-07-15 16:37:06.483964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:27.025 [2024-07-15 16:37:06.590341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:27.025 [2024-07-15 16:37:06.590409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.025 [2024-07-15 16:37:06.590424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.025 [2024-07-15 16:37:06.590449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.025 [2024-07-15 16:37:06.590458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.025 [2024-07-15 16:37:06.590542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.025 [2024-07-15 16:37:06.590579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.025 [2024-07-15 16:37:06.590675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.025 [2024-07-15 16:37:06.590671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 [2024-07-15 16:37:06.734534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.283 16:37:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 Malloc0 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 [2024-07-15 16:37:06.785633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.283 [ 00:20:27.283 { 00:20:27.283 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:27.283 "subtype": "Discovery", 00:20:27.283 "listen_addresses": [], 00:20:27.283 "allow_any_host": true, 00:20:27.283 "hosts": [] 00:20:27.283 }, 00:20:27.283 { 00:20:27.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.283 "subtype": "NVMe", 00:20:27.283 "listen_addresses": [ 00:20:27.283 { 00:20:27.283 "trtype": "TCP", 00:20:27.283 "adrfam": "IPv4", 00:20:27.283 "traddr": "10.0.0.2", 00:20:27.283 "trsvcid": "4420" 00:20:27.283 } 00:20:27.283 ], 00:20:27.283 "allow_any_host": true, 00:20:27.283 "hosts": [], 00:20:27.283 "serial_number": "SPDK00000000000001", 00:20:27.283 "model_number": "SPDK bdev Controller", 00:20:27.283 "max_namespaces": 2, 00:20:27.283 "min_cntlid": 1, 00:20:27.283 "max_cntlid": 65519, 00:20:27.283 "namespaces": [ 00:20:27.283 { 00:20:27.283 "nsid": 1, 00:20:27.283 "bdev_name": "Malloc0", 00:20:27.283 "name": "Malloc0", 00:20:27.283 "nguid": "2659905ADC30420CA2FA4270EE8DDE2B", 00:20:27.283 "uuid": "2659905a-dc30-420c-a2fa-4270ee8dde2b" 00:20:27.283 } 00:20:27.283 ] 00:20:27.283 } 00:20:27.283 ] 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1565698 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:27.283 16:37:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:27.283 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:27.283 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.542 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:27.542 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:27.542 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:27.542 16:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.542 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.801 Malloc1 00:20:27.801 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.801 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:27.801 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.801 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.802 [ 00:20:27.802 { 00:20:27.802 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:27.802 "subtype": "Discovery", 00:20:27.802 "listen_addresses": [], 00:20:27.802 "allow_any_host": true, 00:20:27.802 "hosts": [] 00:20:27.802 }, 00:20:27.802 { 00:20:27.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.802 "subtype": "NVMe", 00:20:27.802 "listen_addresses": [ 00:20:27.802 { 00:20:27.802 "trtype": "TCP", 00:20:27.802 "adrfam": "IPv4", 00:20:27.802 "traddr": "10.0.0.2", 00:20:27.802 "trsvcid": "4420" 00:20:27.802 } 00:20:27.802 ], 00:20:27.802 "allow_any_host": true, 00:20:27.802 "hosts": [], 00:20:27.802 "serial_number": "SPDK00000000000001", 00:20:27.802 "model_number": "SPDK bdev Controller", 00:20:27.802 "max_namespaces": 2, 00:20:27.802 "min_cntlid": 1, 00:20:27.802 "max_cntlid": 65519, 
00:20:27.802 "namespaces": [ 00:20:27.802 { 00:20:27.802 "nsid": 1, 00:20:27.802 "bdev_name": "Malloc0", 00:20:27.802 "name": "Malloc0", 00:20:27.802 "nguid": "2659905ADC30420CA2FA4270EE8DDE2B", 00:20:27.802 "uuid": "2659905a-dc30-420c-a2fa-4270ee8dde2b" 00:20:27.802 }, 00:20:27.802 { 00:20:27.802 "nsid": 2, 00:20:27.802 "bdev_name": "Malloc1", 00:20:27.802 "name": "Malloc1", 00:20:27.802 "nguid": "1E960BD332664C11B9AC79CFE24B3B29", 00:20:27.802 "uuid": "1e960bd3-3266-4c11-b9ac-79cfe24b3b29" 00:20:27.802 } 00:20:27.802 ] 00:20:27.802 } 00:20:27.802 ] 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1565698 00:20:27.802 Asynchronous Event Request test 00:20:27.802 Attaching to 10.0.0.2 00:20:27.802 Attached to 10.0.0.2 00:20:27.802 Registering asynchronous event callbacks... 00:20:27.802 Starting namespace attribute notice tests for all controllers... 00:20:27.802 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:27.802 aer_cb - Changed Namespace 00:20:27.802 Cleaning up... 
00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.802 rmmod nvme_tcp 00:20:27.802 rmmod nvme_fabrics 00:20:27.802 rmmod nvme_keyring 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1565674 ']' 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1565674 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1565674 ']' 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1565674 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1565674 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1565674' 00:20:27.802 killing process with pid 1565674 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1565674 00:20:27.802 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1565674 00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:20:28.059 16:37:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.594 16:37:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.594 00:20:30.594 real 0m5.442s 00:20:30.594 user 0m4.558s 00:20:30.595 sys 0m1.855s 00:20:30.595 16:37:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.595 16:37:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:30.595 ************************************ 00:20:30.595 END TEST nvmf_aer 00:20:30.595 ************************************ 00:20:30.595 16:37:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:30.595 16:37:09 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:30.595 16:37:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:30.595 16:37:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.595 16:37:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.595 ************************************ 00:20:30.595 START TEST nvmf_async_init 00:20:30.595 ************************************ 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:30.595 * Looking for test storage... 
00:20:30.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=01b7bce2c75d4d44878b191713c7980c 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.595 16:37:09 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.496 
16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:32.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:32.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.496 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:32.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:32.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:32.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:32.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:20:32.497 00:20:32.497 --- 10.0.0.2 ping statistics --- 00:20:32.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.497 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:20:32.497 00:20:32.497 --- 10.0.0.1 ping statistics --- 00:20:32.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.497 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1567754 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1567754 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1567754 ']' 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.497 16:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.497 [2024-07-15 16:37:11.952246] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:32.497 [2024-07-15 16:37:11.952318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.497 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.497 [2024-07-15 16:37:12.013986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.756 [2024-07-15 16:37:12.120659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:32.756 [2024-07-15 16:37:12.120707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.756 [2024-07-15 16:37:12.120727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.756 [2024-07-15 16:37:12.120737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.756 [2024-07-15 16:37:12.120746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.756 [2024-07-15 16:37:12.120771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.756 [2024-07-15 16:37:12.272645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.756 null0 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 01b7bce2c75d4d44878b191713c7980c 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:32.756 [2024-07-15 16:37:12.312936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.756 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.016 nvme0n1 00:20:33.016 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.016 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:33.016 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.016 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.016 [ 00:20:33.016 { 00:20:33.016 "name": "nvme0n1", 00:20:33.016 "aliases": [ 00:20:33.016 "01b7bce2-c75d-4d44-878b-191713c7980c" 00:20:33.016 ], 00:20:33.016 "product_name": "NVMe disk", 00:20:33.016 "block_size": 512, 00:20:33.016 "num_blocks": 2097152, 00:20:33.016 "uuid": "01b7bce2-c75d-4d44-878b-191713c7980c", 00:20:33.016 "assigned_rate_limits": { 00:20:33.016 "rw_ios_per_sec": 0, 00:20:33.016 "rw_mbytes_per_sec": 0, 00:20:33.016 "r_mbytes_per_sec": 0, 00:20:33.016 "w_mbytes_per_sec": 0 00:20:33.016 }, 00:20:33.016 "claimed": false, 00:20:33.016 "zoned": false, 00:20:33.016 "supported_io_types": { 00:20:33.016 "read": true, 00:20:33.016 "write": true, 00:20:33.016 "unmap": false, 00:20:33.016 "flush": true, 00:20:33.016 "reset": true, 00:20:33.016 "nvme_admin": true, 00:20:33.016 "nvme_io": true, 00:20:33.016 "nvme_io_md": false, 00:20:33.016 "write_zeroes": true, 00:20:33.016 "zcopy": false, 00:20:33.016 "get_zone_info": false, 00:20:33.016 "zone_management": false, 00:20:33.016 "zone_append": false, 00:20:33.016 "compare": 
true, 00:20:33.016 "compare_and_write": true, 00:20:33.016 "abort": true, 00:20:33.017 "seek_hole": false, 00:20:33.017 "seek_data": false, 00:20:33.017 "copy": true, 00:20:33.017 "nvme_iov_md": false 00:20:33.017 }, 00:20:33.017 "memory_domains": [ 00:20:33.017 { 00:20:33.017 "dma_device_id": "system", 00:20:33.017 "dma_device_type": 1 00:20:33.017 } 00:20:33.017 ], 00:20:33.017 "driver_specific": { 00:20:33.017 "nvme": [ 00:20:33.017 { 00:20:33.017 "trid": { 00:20:33.017 "trtype": "TCP", 00:20:33.017 "adrfam": "IPv4", 00:20:33.017 "traddr": "10.0.0.2", 00:20:33.017 "trsvcid": "4420", 00:20:33.017 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:33.017 }, 00:20:33.017 "ctrlr_data": { 00:20:33.017 "cntlid": 1, 00:20:33.017 "vendor_id": "0x8086", 00:20:33.017 "model_number": "SPDK bdev Controller", 00:20:33.017 "serial_number": "00000000000000000000", 00:20:33.017 "firmware_revision": "24.09", 00:20:33.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:33.017 "oacs": { 00:20:33.017 "security": 0, 00:20:33.017 "format": 0, 00:20:33.017 "firmware": 0, 00:20:33.017 "ns_manage": 0 00:20:33.017 }, 00:20:33.017 "multi_ctrlr": true, 00:20:33.017 "ana_reporting": false 00:20:33.017 }, 00:20:33.017 "vs": { 00:20:33.017 "nvme_version": "1.3" 00:20:33.017 }, 00:20:33.017 "ns_data": { 00:20:33.017 "id": 1, 00:20:33.017 "can_share": true 00:20:33.017 } 00:20:33.017 } 00:20:33.017 ], 00:20:33.017 "mp_policy": "active_passive" 00:20:33.017 } 00:20:33.017 } 00:20:33.017 ] 00:20:33.017 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.017 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:33.017 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.017 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.017 [2024-07-15 16:37:12.566007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:33.017 [2024-07-15 16:37:12.566095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfd090 (9): Bad file descriptor 00:20:33.276 [2024-07-15 16:37:12.708031] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.276 [ 00:20:33.276 { 00:20:33.276 "name": "nvme0n1", 00:20:33.276 "aliases": [ 00:20:33.276 "01b7bce2-c75d-4d44-878b-191713c7980c" 00:20:33.276 ], 00:20:33.276 "product_name": "NVMe disk", 00:20:33.276 "block_size": 512, 00:20:33.276 "num_blocks": 2097152, 00:20:33.276 "uuid": "01b7bce2-c75d-4d44-878b-191713c7980c", 00:20:33.276 "assigned_rate_limits": { 00:20:33.276 "rw_ios_per_sec": 0, 00:20:33.276 "rw_mbytes_per_sec": 0, 00:20:33.276 "r_mbytes_per_sec": 0, 00:20:33.276 "w_mbytes_per_sec": 0 00:20:33.276 }, 00:20:33.276 "claimed": false, 00:20:33.276 "zoned": false, 00:20:33.276 "supported_io_types": { 00:20:33.276 "read": true, 00:20:33.276 "write": true, 00:20:33.276 "unmap": false, 00:20:33.276 "flush": true, 00:20:33.276 "reset": true, 00:20:33.276 "nvme_admin": true, 00:20:33.276 "nvme_io": true, 00:20:33.276 "nvme_io_md": false, 00:20:33.276 "write_zeroes": true, 00:20:33.276 "zcopy": false, 00:20:33.276 "get_zone_info": false, 00:20:33.276 "zone_management": false, 00:20:33.276 "zone_append": false, 00:20:33.276 "compare": true, 00:20:33.276 "compare_and_write": true, 00:20:33.276 "abort": true, 00:20:33.276 "seek_hole": false, 00:20:33.276 "seek_data": false, 00:20:33.276 "copy": true, 00:20:33.276 "nvme_iov_md": 
false 00:20:33.276 }, 00:20:33.276 "memory_domains": [ 00:20:33.276 { 00:20:33.276 "dma_device_id": "system", 00:20:33.276 "dma_device_type": 1 00:20:33.276 } 00:20:33.276 ], 00:20:33.276 "driver_specific": { 00:20:33.276 "nvme": [ 00:20:33.276 { 00:20:33.276 "trid": { 00:20:33.276 "trtype": "TCP", 00:20:33.276 "adrfam": "IPv4", 00:20:33.276 "traddr": "10.0.0.2", 00:20:33.276 "trsvcid": "4420", 00:20:33.276 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:33.276 }, 00:20:33.276 "ctrlr_data": { 00:20:33.276 "cntlid": 2, 00:20:33.276 "vendor_id": "0x8086", 00:20:33.276 "model_number": "SPDK bdev Controller", 00:20:33.276 "serial_number": "00000000000000000000", 00:20:33.276 "firmware_revision": "24.09", 00:20:33.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:33.276 "oacs": { 00:20:33.276 "security": 0, 00:20:33.276 "format": 0, 00:20:33.276 "firmware": 0, 00:20:33.276 "ns_manage": 0 00:20:33.276 }, 00:20:33.276 "multi_ctrlr": true, 00:20:33.276 "ana_reporting": false 00:20:33.276 }, 00:20:33.276 "vs": { 00:20:33.276 "nvme_version": "1.3" 00:20:33.276 }, 00:20:33.276 "ns_data": { 00:20:33.276 "id": 1, 00:20:33.276 "can_share": true 00:20:33.276 } 00:20:33.276 } 00:20:33.276 ], 00:20:33.276 "mp_policy": "active_passive" 00:20:33.276 } 00:20:33.276 } 00:20:33.276 ] 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.73GEXbg6Yp 00:20:33.276 16:37:12 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.73GEXbg6Yp 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.276 [2024-07-15 16:37:12.762637] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.276 [2024-07-15 16:37:12.762794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.73GEXbg6Yp 00:20:33.276 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.277 [2024-07-15 16:37:12.770657] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.73GEXbg6Yp 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.277 [2024-07-15 16:37:12.778687] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.277 [2024-07-15 16:37:12.778750] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:33.277 nvme0n1 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.277 [ 00:20:33.277 { 00:20:33.277 "name": "nvme0n1", 00:20:33.277 "aliases": [ 00:20:33.277 "01b7bce2-c75d-4d44-878b-191713c7980c" 00:20:33.277 ], 00:20:33.277 "product_name": "NVMe disk", 00:20:33.277 "block_size": 512, 00:20:33.277 "num_blocks": 2097152, 00:20:33.277 "uuid": "01b7bce2-c75d-4d44-878b-191713c7980c", 00:20:33.277 "assigned_rate_limits": { 00:20:33.277 "rw_ios_per_sec": 0, 00:20:33.277 "rw_mbytes_per_sec": 0, 00:20:33.277 "r_mbytes_per_sec": 0, 00:20:33.277 "w_mbytes_per_sec": 0 00:20:33.277 }, 00:20:33.277 "claimed": false, 00:20:33.277 "zoned": false, 00:20:33.277 "supported_io_types": { 00:20:33.277 "read": true, 00:20:33.277 "write": true, 00:20:33.277 "unmap": false, 00:20:33.277 "flush": true, 00:20:33.277 "reset": true, 
00:20:33.277 "nvme_admin": true, 00:20:33.277 "nvme_io": true, 00:20:33.277 "nvme_io_md": false, 00:20:33.277 "write_zeroes": true, 00:20:33.277 "zcopy": false, 00:20:33.277 "get_zone_info": false, 00:20:33.277 "zone_management": false, 00:20:33.277 "zone_append": false, 00:20:33.277 "compare": true, 00:20:33.277 "compare_and_write": true, 00:20:33.277 "abort": true, 00:20:33.277 "seek_hole": false, 00:20:33.277 "seek_data": false, 00:20:33.277 "copy": true, 00:20:33.277 "nvme_iov_md": false 00:20:33.277 }, 00:20:33.277 "memory_domains": [ 00:20:33.277 { 00:20:33.277 "dma_device_id": "system", 00:20:33.277 "dma_device_type": 1 00:20:33.277 } 00:20:33.277 ], 00:20:33.277 "driver_specific": { 00:20:33.277 "nvme": [ 00:20:33.277 { 00:20:33.277 "trid": { 00:20:33.277 "trtype": "TCP", 00:20:33.277 "adrfam": "IPv4", 00:20:33.277 "traddr": "10.0.0.2", 00:20:33.277 "trsvcid": "4421", 00:20:33.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:33.277 }, 00:20:33.277 "ctrlr_data": { 00:20:33.277 "cntlid": 3, 00:20:33.277 "vendor_id": "0x8086", 00:20:33.277 "model_number": "SPDK bdev Controller", 00:20:33.277 "serial_number": "00000000000000000000", 00:20:33.277 "firmware_revision": "24.09", 00:20:33.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:33.277 "oacs": { 00:20:33.277 "security": 0, 00:20:33.277 "format": 0, 00:20:33.277 "firmware": 0, 00:20:33.277 "ns_manage": 0 00:20:33.277 }, 00:20:33.277 "multi_ctrlr": true, 00:20:33.277 "ana_reporting": false 00:20:33.277 }, 00:20:33.277 "vs": { 00:20:33.277 "nvme_version": "1.3" 00:20:33.277 }, 00:20:33.277 "ns_data": { 00:20:33.277 "id": 1, 00:20:33.277 "can_share": true 00:20:33.277 } 00:20:33.277 } 00:20:33.277 ], 00:20:33.277 "mp_policy": "active_passive" 00:20:33.277 } 00:20:33.277 } 00:20:33.277 ] 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.277 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.73GEXbg6Yp 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.535 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.535 rmmod nvme_tcp 00:20:33.536 rmmod nvme_fabrics 00:20:33.536 rmmod nvme_keyring 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1567754 ']' 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1567754 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1567754 ']' 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1567754 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:33.536 16:37:12 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1567754 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1567754' 00:20:33.536 killing process with pid 1567754 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1567754 00:20:33.536 [2024-07-15 16:37:12.963245] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:33.536 [2024-07-15 16:37:12.963281] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:33.536 16:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1567754 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.797 16:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.696 16:37:15 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.696 00:20:35.696 real 0m5.562s 00:20:35.696 user 0m2.134s 00:20:35.696 sys 0m1.810s 00:20:35.696 16:37:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.696 16:37:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:35.696 ************************************ 00:20:35.696 END TEST nvmf_async_init 00:20:35.696 ************************************ 00:20:35.954 16:37:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:35.954 16:37:15 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:35.954 16:37:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:35.954 16:37:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.954 16:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.954 ************************************ 00:20:35.954 START TEST dma 00:20:35.954 ************************************ 00:20:35.954 16:37:15 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:35.954 * Looking for test storage... 
00:20:35.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:35.954 16:37:15 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.954 16:37:15 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:20:35.954 16:37:15 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.954 16:37:15 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.954 16:37:15 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.954 16:37:15 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.954 16:37:15 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:35.954 16:37:15 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:35.954 16:37:15 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.954 16:37:15 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.954 16:37:15 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:35.954 16:37:15 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:35.955 00:20:35.955 real 0m0.063s 00:20:35.955 user 0m0.025s 00:20:35.955 sys 0m0.043s 00:20:35.955 16:37:15 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.955 16:37:15 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:35.955 ************************************ 00:20:35.955 END TEST dma 00:20:35.955 ************************************ 00:20:35.955 16:37:15 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:20:35.955 16:37:15 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:35.955 16:37:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:35.955 16:37:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.955 16:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.955 ************************************ 00:20:35.955 START TEST nvmf_identify 00:20:35.955 ************************************ 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:35.955 * Looking for test storage... 00:20:35.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:20:35.955 16:37:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.862 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:37.863 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:37.863 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:37.863 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:37.863 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.863 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.123 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.123 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.123 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.123 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.123 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.123 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.123 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:38.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:20:38.123 00:20:38.123 --- 10.0.0.2 ping statistics --- 00:20:38.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.124 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:20:38.124 00:20:38.124 --- 10.0.0.1 ping statistics --- 00:20:38.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.124 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1569880 00:20:38.124 16:37:17 
nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1569880 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1569880 ']' 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.124 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.124 [2024-07-15 16:37:17.651633] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:38.124 [2024-07-15 16:37:17.651717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.124 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.124 [2024-07-15 16:37:17.719925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.383 [2024-07-15 16:37:17.832802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:38.383 [2024-07-15 16:37:17.832859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.383 [2024-07-15 16:37:17.832872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.383 [2024-07-15 16:37:17.832893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.383 [2024-07-15 16:37:17.832904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.383 [2024-07-15 16:37:17.832988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.383 [2024-07-15 16:37:17.833079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.383 [2024-07-15 16:37:17.833011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.383 [2024-07-15 16:37:17.833082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.383 [2024-07-15 16:37:17.964764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.383 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 16:37:17 
nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:38.645 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.645 16:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 Malloc0 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 [2024-07-15 16:37:18.046399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 [ 00:20:38.645 { 00:20:38.645 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:38.645 "subtype": "Discovery", 00:20:38.645 "listen_addresses": [ 00:20:38.645 { 00:20:38.645 "trtype": "TCP", 00:20:38.645 "adrfam": "IPv4", 00:20:38.645 "traddr": "10.0.0.2", 00:20:38.645 "trsvcid": "4420" 00:20:38.645 } 00:20:38.645 ], 00:20:38.645 "allow_any_host": true, 00:20:38.645 "hosts": [] 00:20:38.645 }, 00:20:38.645 { 00:20:38.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.645 "subtype": "NVMe", 00:20:38.645 "listen_addresses": [ 00:20:38.645 { 00:20:38.645 "trtype": "TCP", 00:20:38.645 "adrfam": "IPv4", 00:20:38.645 "traddr": "10.0.0.2", 00:20:38.645 "trsvcid": "4420" 00:20:38.645 } 00:20:38.645 ], 00:20:38.645 "allow_any_host": true, 00:20:38.645 "hosts": [], 00:20:38.645 "serial_number": "SPDK00000000000001", 00:20:38.645 "model_number": "SPDK bdev Controller", 00:20:38.645 "max_namespaces": 32, 00:20:38.645 "min_cntlid": 1, 00:20:38.645 "max_cntlid": 65519, 00:20:38.645 "namespaces": [ 00:20:38.645 { 00:20:38.645 "nsid": 1, 00:20:38.645 "bdev_name": "Malloc0", 00:20:38.645 "name": "Malloc0", 00:20:38.645 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:38.645 "eui64": "ABCDEF0123456789", 00:20:38.645 "uuid": "ca62769d-c252-4249-858d-73d5299fe145" 00:20:38.645 } 00:20:38.645 ] 00:20:38.645 } 00:20:38.645 ] 00:20:38.645 16:37:18 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.645 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:38.645 [2024-07-15 16:37:18.089552] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:38.645 [2024-07-15 16:37:18.089597] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569902 ] 00:20:38.645 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.645 [2024-07-15 16:37:18.124345] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:38.645 [2024-07-15 16:37:18.124409] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:38.645 [2024-07-15 16:37:18.124418] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:38.645 [2024-07-15 16:37:18.124433] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:38.645 [2024-07-15 16:37:18.124442] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:38.645 [2024-07-15 16:37:18.127921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:38.645 [2024-07-15 16:37:18.127991] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a5b540 0 00:20:38.645 [2024-07-15 16:37:18.134889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:38.645 [2024-07-15 16:37:18.134910] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:38.645 [2024-07-15 16:37:18.134919] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:38.645 [2024-07-15 16:37:18.134925] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:38.645 [2024-07-15 16:37:18.134977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.645 [2024-07-15 16:37:18.134989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.645 [2024-07-15 16:37:18.134997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.645 [2024-07-15 16:37:18.135014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:38.645 [2024-07-15 16:37:18.135040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.645 [2024-07-15 16:37:18.141889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.645 [2024-07-15 16:37:18.141907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.645 [2024-07-15 16:37:18.141914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.645 [2024-07-15 16:37:18.141922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.645 [2024-07-15 16:37:18.141942] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:38.645 [2024-07-15 16:37:18.141968] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:38.645 [2024-07-15 16:37:18.141977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:38.645 [2024-07-15 16:37:18.141999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.645 [2024-07-15 
16:37:18.142007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.645 [2024-07-15 16:37:18.142014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.645 [2024-07-15 16:37:18.142025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.645 [2024-07-15 16:37:18.142049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.645 [2024-07-15 16:37:18.142232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.645 [2024-07-15 16:37:18.142247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.142254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.142269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:38.646 [2024-07-15 16:37:18.142282] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:38.646 [2024-07-15 16:37:18.142294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.142318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.646 [2024-07-15 16:37:18.142339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 
00:20:38.646 [2024-07-15 16:37:18.142467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.646 [2024-07-15 16:37:18.142479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.142486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.142500] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:38.646 [2024-07-15 16:37:18.142514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:38.646 [2024-07-15 16:37:18.142525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.142549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.646 [2024-07-15 16:37:18.142569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.646 [2024-07-15 16:37:18.142743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.646 [2024-07-15 16:37:18.142754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.142761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.142780] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:38.646 [2024-07-15 16:37:18.142798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.142812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.142822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.646 [2024-07-15 16:37:18.142843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.646 [2024-07-15 16:37:18.142977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.646 [2024-07-15 16:37:18.142992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.142999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.143014] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:38.646 [2024-07-15 16:37:18.143022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:38.646 [2024-07-15 16:37:18.143035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:38.646 [2024-07-15 16:37:18.143145] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
00:20:38.646 [2024-07-15 16:37:18.143154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:38.646 [2024-07-15 16:37:18.143167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.143207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.646 [2024-07-15 16:37:18.143228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.646 [2024-07-15 16:37:18.143406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.646 [2024-07-15 16:37:18.143422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.143428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.143443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:38.646 [2024-07-15 16:37:18.143459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.143485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.646 [2024-07-15 16:37:18.143505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.646 [2024-07-15 16:37:18.143634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.646 [2024-07-15 16:37:18.143649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.143655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.143674] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:38.646 [2024-07-15 16:37:18.143682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:38.646 [2024-07-15 16:37:18.143696] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:38.646 [2024-07-15 16:37:18.143715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:38.646 [2024-07-15 16:37:18.143731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.143749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.646 [2024-07-15 16:37:18.143770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.646 
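The records above trace the standard NVMe controller enable handshake the driver walks through before IDENTIFY: check CC.EN, disable and wait for CSTS.RDY = 0, set CC.EN = 1, then wait for CSTS.RDY = 1. A minimal sketch of that polling loop, using a hypothetical `FakeCtrlrRegs` register model in place of the fabric PROPERTY GET/SET exchanges seen in the log (the class and helper names are illustrative, not SPDK APIs):

```python
import time

class FakeCtrlrRegs:
    """Hypothetical register model standing in for the controller's
    CC/CSTS properties; a real controller flips CSTS.RDY some time
    after CC.EN changes."""
    def __init__(self):
        self.cc_en = 0
        self.csts_rdy = 0

    def write_cc_en(self, val):
        self.cc_en = val
        self.csts_rdy = val  # instant transition in this fake model

    def read_csts_rdy(self):
        return self.csts_rdy

def enable_controller(regs, timeout_s=15.0, poll_s=0.0):
    """Mirror the state machine in the log: disable, wait for
    CSTS.RDY = 0, set CC.EN = 1, then wait for CSTS.RDY = 1."""
    deadline = time.monotonic() + timeout_s
    regs.write_cc_en(0)
    while regs.read_csts_rdy() != 0:      # "wait for CSTS.RDY = 0"
        if time.monotonic() > deadline:
            raise TimeoutError("CSTS.RDY never cleared")
        time.sleep(poll_s)
    regs.write_cc_en(1)                   # "Setting CC.EN = 1"
    while regs.read_csts_rdy() != 1:      # "wait for CSTS.RDY = 1"
        if time.monotonic() > deadline:
            raise TimeoutError("CSTS.RDY never set")
        time.sleep(poll_s)
    return "ready"                        # next step: reset admin queue, IDENTIFY

print(enable_controller(FakeCtrlrRegs()))
```

The 15000 ms deadline mirrors the per-state timeouts printed by `_nvme_ctrlr_set_state` above.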
[2024-07-15 16:37:18.143956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.646 [2024-07-15 16:37:18.143971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.646 [2024-07-15 16:37:18.143978] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.143985] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5b540): datao=0, datal=4096, cccid=0 00:20:38.646 [2024-07-15 16:37:18.143993] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb3c0) on tqpair(0x1a5b540): expected_datao=0, payload_size=4096 00:20:38.646 [2024-07-15 16:37:18.144000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144011] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144019] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.646 [2024-07-15 16:37:18.144100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.144107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.144125] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:38.646 [2024-07-15 16:37:18.144138] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:38.646 [2024-07-15 16:37:18.144146] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:38.646 [2024-07-15 16:37:18.144155] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:38.646 [2024-07-15 16:37:18.144163] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:38.646 [2024-07-15 16:37:18.144171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:38.646 [2024-07-15 16:37:18.144185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:38.646 [2024-07-15 16:37:18.144197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.144225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.646 [2024-07-15 16:37:18.144247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.646 [2024-07-15 16:37:18.144442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.646 [2024-07-15 16:37:18.144458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.646 [2024-07-15 16:37:18.144465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.646 [2024-07-15 16:37:18.144483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144497] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.144506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.646 [2024-07-15 16:37:18.144516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a5b540) 00:20:38.646 [2024-07-15 16:37:18.144538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.646 [2024-07-15 16:37:18.144547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.646 [2024-07-15 16:37:18.144560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.144569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.647 [2024-07-15 16:37:18.144578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.144585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.144591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.144599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.647 [2024-07-15 16:37:18.144608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:20:38.647 [2024-07-15 16:37:18.144627] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:38.647 [2024-07-15 16:37:18.144640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.144647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.144672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.647 [2024-07-15 16:37:18.144694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb3c0, cid 0, qid 0 00:20:38.647 [2024-07-15 16:37:18.144704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb540, cid 1, qid 0 00:20:38.647 [2024-07-15 16:37:18.144712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb6c0, cid 2, qid 0 00:20:38.647 [2024-07-15 16:37:18.144735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.647 [2024-07-15 16:37:18.144743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb9c0, cid 4, qid 0 00:20:38.647 [2024-07-15 16:37:18.144906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.647 [2024-07-15 16:37:18.144921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.647 [2024-07-15 16:37:18.144932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.144940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb9c0) on tqpair=0x1a5b540 00:20:38.647 [2024-07-15 16:37:18.144949] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:38.647 [2024-07-15 
16:37:18.144958] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:38.647 [2024-07-15 16:37:18.144975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.144985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.144995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.647 [2024-07-15 16:37:18.145016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb9c0, cid 4, qid 0 00:20:38.647 [2024-07-15 16:37:18.145200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.647 [2024-07-15 16:37:18.145212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.647 [2024-07-15 16:37:18.145219] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145225] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5b540): datao=0, datal=4096, cccid=4 00:20:38.647 [2024-07-15 16:37:18.145233] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb9c0) on tqpair(0x1a5b540): expected_datao=0, payload_size=4096 00:20:38.647 [2024-07-15 16:37:18.145240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145250] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145257] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.647 [2024-07-15 16:37:18.145327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.647 [2024-07-15 16:37:18.145333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb9c0) on tqpair=0x1a5b540 00:20:38.647 [2024-07-15 16:37:18.145358] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:38.647 [2024-07-15 16:37:18.145396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.145418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.647 [2024-07-15 16:37:18.145429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.145450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.647 [2024-07-15 16:37:18.145476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb9c0, cid 4, qid 0 00:20:38.647 [2024-07-15 16:37:18.145488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbb40, cid 5, qid 0 00:20:38.647 [2024-07-15 16:37:18.145664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.647 [2024-07-15 16:37:18.145679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.647 [2024-07-15 16:37:18.145686] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145692] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1a5b540): datao=0, datal=1024, cccid=4 00:20:38.647 [2024-07-15 16:37:18.145704] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb9c0) on tqpair(0x1a5b540): expected_datao=0, payload_size=1024 00:20:38.647 [2024-07-15 16:37:18.145712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145721] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145728] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.647 [2024-07-15 16:37:18.145746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.647 [2024-07-15 16:37:18.145752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.145759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abbb40) on tqpair=0x1a5b540 00:20:38.647 [2024-07-15 16:37:18.187891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.647 [2024-07-15 16:37:18.187909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.647 [2024-07-15 16:37:18.187917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.187924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb9c0) on tqpair=0x1a5b540 00:20:38.647 [2024-07-15 16:37:18.187941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.187950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.187961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.647 [2024-07-15 16:37:18.188006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1abb9c0, cid 4, qid 0 00:20:38.647 [2024-07-15 16:37:18.188169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.647 [2024-07-15 16:37:18.188185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.647 [2024-07-15 16:37:18.188192] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.188198] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5b540): datao=0, datal=3072, cccid=4 00:20:38.647 [2024-07-15 16:37:18.188206] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb9c0) on tqpair(0x1a5b540): expected_datao=0, payload_size=3072 00:20:38.647 [2024-07-15 16:37:18.188214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.188239] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.188248] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.235895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.647 [2024-07-15 16:37:18.235915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.647 [2024-07-15 16:37:18.235923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.235930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb9c0) on tqpair=0x1a5b540 00:20:38.647 [2024-07-15 16:37:18.235946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.647 [2024-07-15 16:37:18.235954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5b540) 00:20:38.647 [2024-07-15 16:37:18.235965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.647 [2024-07-15 16:37:18.235996] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb9c0, cid 4, qid 0
00:20:38.647 [2024-07-15 16:37:18.236144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:38.647 [2024-07-15 16:37:18.236160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:38.647 [2024-07-15 16:37:18.236166] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:38.647 [2024-07-15 16:37:18.236173] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5b540): datao=0, datal=8, cccid=4
00:20:38.647 [2024-07-15 16:37:18.236180] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb9c0) on tqpair(0x1a5b540): expected_datao=0, payload_size=8
00:20:38.647 [2024-07-15 16:37:18.236193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:38.647 [2024-07-15 16:37:18.236203] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:38.647 [2024-07-15 16:37:18.236211] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:38.914 [2024-07-15 16:37:18.276996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:38.914 [2024-07-15 16:37:18.277016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:38.914 [2024-07-15 16:37:18.277023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:38.914 [2024-07-15 16:37:18.277030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb9c0) on tqpair=0x1a5b540
00:20:38.914 =====================================================
00:20:38.914 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:38.914 =====================================================
00:20:38.914 Controller Capabilities/Features
00:20:38.914 ================================
00:20:38.914 Vendor ID: 0000
00:20:38.914 Subsystem Vendor ID: 0000
00:20:38.914 Serial Number: ....................
00:20:38.914 Model Number: ........................................
00:20:38.914 Firmware Version: 24.09
00:20:38.914 Recommended Arb Burst: 0
00:20:38.914 IEEE OUI Identifier: 00 00 00
00:20:38.914 Multi-path I/O
00:20:38.914 May have multiple subsystem ports: No
00:20:38.914 May have multiple controllers: No
00:20:38.914 Associated with SR-IOV VF: No
00:20:38.914 Max Data Transfer Size: 131072
00:20:38.914 Max Number of Namespaces: 0
00:20:38.914 Max Number of I/O Queues: 1024
00:20:38.914 NVMe Specification Version (VS): 1.3
00:20:38.914 NVMe Specification Version (Identify): 1.3
00:20:38.914 Maximum Queue Entries: 128
00:20:38.914 Contiguous Queues Required: Yes
00:20:38.914 Arbitration Mechanisms Supported
00:20:38.914 Weighted Round Robin: Not Supported
00:20:38.914 Vendor Specific: Not Supported
00:20:38.914 Reset Timeout: 15000 ms
00:20:38.914 Doorbell Stride: 4 bytes
00:20:38.914 NVM Subsystem Reset: Not Supported
00:20:38.914 Command Sets Supported
00:20:38.914 NVM Command Set: Supported
00:20:38.914 Boot Partition: Not Supported
00:20:38.914 Memory Page Size Minimum: 4096 bytes
00:20:38.914 Memory Page Size Maximum: 4096 bytes
00:20:38.914 Persistent Memory Region: Not Supported
00:20:38.914 Optional Asynchronous Events Supported
00:20:38.914 Namespace Attribute Notices: Not Supported
00:20:38.914 Firmware Activation Notices: Not Supported
00:20:38.914 ANA Change Notices: Not Supported
00:20:38.914 PLE Aggregate Log Change Notices: Not Supported
00:20:38.914 LBA Status Info Alert Notices: Not Supported
00:20:38.914 EGE Aggregate Log Change Notices: Not Supported
00:20:38.914 Normal NVM Subsystem Shutdown event: Not Supported
00:20:38.914 Zone Descriptor Change Notices: Not Supported
00:20:38.914 Discovery Log Change Notices: Supported
00:20:38.914 Controller Attributes
00:20:38.914 128-bit Host Identifier: Not Supported
00:20:38.914 Non-Operational Permissive Mode: Not Supported
00:20:38.914 NVM Sets: Not Supported
00:20:38.914 Read Recovery Levels: Not Supported
00:20:38.914 Endurance Groups: Not Supported
00:20:38.914 Predictable Latency Mode: Not Supported
00:20:38.914 Traffic Based Keep ALive: Not Supported
00:20:38.914 Namespace Granularity: Not Supported
00:20:38.914 SQ Associations: Not Supported
00:20:38.914 UUID List: Not Supported
00:20:38.914 Multi-Domain Subsystem: Not Supported
00:20:38.914 Fixed Capacity Management: Not Supported
00:20:38.914 Variable Capacity Management: Not Supported
00:20:38.914 Delete Endurance Group: Not Supported
00:20:38.914 Delete NVM Set: Not Supported
00:20:38.914 Extended LBA Formats Supported: Not Supported
00:20:38.914 Flexible Data Placement Supported: Not Supported
00:20:38.914
00:20:38.914 Controller Memory Buffer Support
00:20:38.914 ================================
00:20:38.914 Supported: No
00:20:38.914
00:20:38.914 Persistent Memory Region Support
00:20:38.914 ================================
00:20:38.914 Supported: No
00:20:38.914
00:20:38.914 Admin Command Set Attributes
00:20:38.915 ============================
00:20:38.915 Security Send/Receive: Not Supported
00:20:38.915 Format NVM: Not Supported
00:20:38.915 Firmware Activate/Download: Not Supported
00:20:38.915 Namespace Management: Not Supported
00:20:38.915 Device Self-Test: Not Supported
00:20:38.915 Directives: Not Supported
00:20:38.915 NVMe-MI: Not Supported
00:20:38.915 Virtualization Management: Not Supported
00:20:38.915 Doorbell Buffer Config: Not Supported
00:20:38.915 Get LBA Status Capability: Not Supported
00:20:38.915 Command & Feature Lockdown Capability: Not Supported
00:20:38.915 Abort Command Limit: 1
00:20:38.915 Async Event Request Limit: 4
00:20:38.915 Number of Firmware Slots: N/A
00:20:38.915 Firmware Slot 1 Read-Only: N/A
00:20:38.915 Firmware Activation Without Reset: N/A
00:20:38.915 Multiple Update Detection Support: N/A
00:20:38.915 Firmware Update Granularity: No Information Provided
00:20:38.915 Per-Namespace SMART Log: No
00:20:38.915 Asymmetric Namespace Access Log Page: Not Supported
00:20:38.915 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:38.915 Command Effects Log Page: Not Supported
00:20:38.915 Get Log Page Extended Data: Supported
00:20:38.915 Telemetry Log Pages: Not Supported
00:20:38.915 Persistent Event Log Pages: Not Supported
00:20:38.915 Supported Log Pages Log Page: May Support
00:20:38.915 Commands Supported & Effects Log Page: Not Supported
00:20:38.915 Feature Identifiers & Effects Log Page:May Support
00:20:38.915 NVMe-MI Commands & Effects Log Page: May Support
00:20:38.915 Data Area 4 for Telemetry Log: Not Supported
00:20:38.915 Error Log Page Entries Supported: 128
00:20:38.915 Keep Alive: Not Supported
00:20:38.915
00:20:38.915 NVM Command Set Attributes
00:20:38.915 ==========================
00:20:38.915 Submission Queue Entry Size
00:20:38.915 Max: 1
00:20:38.915 Min: 1
00:20:38.915 Completion Queue Entry Size
00:20:38.915 Max: 1
00:20:38.915 Min: 1
00:20:38.915 Number of Namespaces: 0
00:20:38.915 Compare Command: Not Supported
00:20:38.915 Write Uncorrectable Command: Not Supported
00:20:38.915 Dataset Management Command: Not Supported
00:20:38.915 Write Zeroes Command: Not Supported
00:20:38.915 Set Features Save Field: Not Supported
00:20:38.915 Reservations: Not Supported
00:20:38.915 Timestamp: Not Supported
00:20:38.915 Copy: Not Supported
00:20:38.915 Volatile Write Cache: Not Present
00:20:38.915 Atomic Write Unit (Normal): 1
00:20:38.915 Atomic Write Unit (PFail): 1
00:20:38.915 Atomic Compare & Write Unit: 1
00:20:38.915 Fused Compare & Write: Supported
00:20:38.915 Scatter-Gather List
00:20:38.915 SGL Command Set: Supported
00:20:38.915 SGL Keyed: Supported
00:20:38.915 SGL Bit Bucket Descriptor: Not Supported
00:20:38.915 SGL Metadata Pointer: Not Supported
00:20:38.915 Oversized SGL: Not Supported
00:20:38.915 SGL Metadata Address: Not Supported
00:20:38.915 SGL Offset: Supported
00:20:38.915 Transport SGL Data Block: Not Supported
00:20:38.915 Replay Protected Memory Block: Not Supported
00:20:38.915
00:20:38.915 Firmware Slot Information
00:20:38.915 =========================
00:20:38.915 Active slot: 0
00:20:38.915
00:20:38.915
00:20:38.915 Error Log
00:20:38.915 =========
00:20:38.915
00:20:38.915 Active Namespaces
00:20:38.915 =================
00:20:38.915 Discovery Log Page
00:20:38.915 ==================
00:20:38.915 Generation Counter: 2
00:20:38.915 Number of Records: 2
00:20:38.915 Record Format: 0
00:20:38.915
00:20:38.915 Discovery Log Entry 0
00:20:38.915 ----------------------
00:20:38.915 Transport Type: 3 (TCP)
00:20:38.915 Address Family: 1 (IPv4)
00:20:38.915 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:38.915 Entry Flags:
00:20:38.915 Duplicate Returned Information: 1
00:20:38.915 Explicit Persistent Connection Support for Discovery: 1
00:20:38.915 Transport Requirements:
00:20:38.915 Secure Channel: Not Required
00:20:38.915 Port ID: 0 (0x0000)
00:20:38.915 Controller ID: 65535 (0xffff)
00:20:38.915 Admin Max SQ Size: 128
00:20:38.915 Transport Service Identifier: 4420
00:20:38.915 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:38.915 Transport Address: 10.0.0.2
00:20:38.915 Discovery Log Entry 1
00:20:38.915 ----------------------
00:20:38.915 Transport Type: 3 (TCP)
00:20:38.915 Address Family: 1 (IPv4)
00:20:38.915 Subsystem Type: 2 (NVM Subsystem)
00:20:38.915 Entry Flags:
00:20:38.915 Duplicate Returned Information: 0
00:20:38.915 Explicit Persistent Connection Support for Discovery: 0
00:20:38.915 Transport Requirements:
00:20:38.915 Secure Channel: Not Required
00:20:38.915 Port ID: 0 (0x0000)
00:20:38.915 Controller ID: 65535 (0xffff)
00:20:38.915 Admin Max SQ Size: 128
00:20:38.915 Transport Service Identifier: 4420
00:20:38.915 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:38.915 Transport Address: 10.0.0.2
[2024-07-15 16:37:18.277153] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
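The discovery log page printed above lists two entries: the current discovery subsystem and the NVM subsystem nqn.2016-06.io.spdk:cnode1, both reachable over TCP at 10.0.0.2:4420. A minimal sketch of decoding one raw 1024-byte discovery log entry into those printed fields; the byte offsets follow my reading of the NVMe over Fabrics specification and should be treated as an assumption, and the synthesized entry bytes below are illustrative, not captured from this run:

```python
import struct

def parse_discovery_entry(buf: bytes) -> dict:
    """Decode one Discovery Log Page Entry (assumed layout: TRTYPE@0,
    ADRFAM@1, SUBTYPE@2, TREQ@3, PORTID/CNTLID/ASQSZ as little-endian
    u16 at 4/6/8, TRSVCID@32:64, SUBNQN@256:512, TRADDR@512:768)."""
    trtype, adrfam, subtype, treq = buf[0], buf[1], buf[2], buf[3]
    portid, cntlid, asqsz = struct.unpack_from("<HHH", buf, 4)
    return {
        "trtype": trtype,      # 3 = TCP
        "adrfam": adrfam,      # 1 = IPv4
        "subtype": subtype,    # 2 = NVM subsystem, 3 = current discovery subsystem
        "treq": treq,
        "portid": portid,
        "cntlid": cntlid,      # 0xffff = dynamic controller model
        "asqsz": asqsz,
        "trsvcid": buf[32:64].decode().strip("\x00 "),
        "subnqn": buf[256:512].decode().strip("\x00 "),
        "traddr": buf[512:768].decode().strip("\x00 "),
    }

# Synthesize an entry mirroring "Discovery Log Entry 1" from the log.
entry = bytearray(1024)
entry[0:4] = bytes([3, 1, 2, 0])                  # TCP, IPv4, NVM subsystem
struct.pack_into("<HHH", entry, 4, 0, 0xFFFF, 128)  # portid, cntlid, asqsz
entry[32:36] = b"4420"
nqn = b"nqn.2016-06.io.spdk:cnode1"
entry[256:256 + len(nqn)] = nqn
entry[512:520] = b"10.0.0.2"
print(parse_discovery_entry(bytes(entry)))
```

In a real discovery flow the host first reads the 16-byte log header to get the generation counter and record count, then reads the entries, as the repeated GET LOG PAGE (02) commands above show.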
00:20:38.915 [2024-07-15 16:37:18.277175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb3c0) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.277186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.915 [2024-07-15 16:37:18.277195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb540) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.277203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.915 [2024-07-15 16:37:18.277211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb6c0) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.277219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.915 [2024-07-15 16:37:18.277227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.277235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.915 [2024-07-15 16:37:18.277252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.915 [2024-07-15 16:37:18.277279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.915 [2024-07-15 16:37:18.277318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.915 [2024-07-15 16:37:18.277464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:38.915 [2024-07-15 16:37:18.277480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.915 [2024-07-15 16:37:18.277487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.277506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.915 [2024-07-15 16:37:18.277530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.915 [2024-07-15 16:37:18.277557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.915 [2024-07-15 16:37:18.277695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.915 [2024-07-15 16:37:18.277707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.915 [2024-07-15 16:37:18.277714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.277729] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:38.915 [2024-07-15 16:37:18.277740] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:38.915 [2024-07-15 16:37:18.277756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:38.915 [2024-07-15 16:37:18.277772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.915 [2024-07-15 16:37:18.277782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.915 [2024-07-15 16:37:18.277803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.915 [2024-07-15 16:37:18.277946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.915 [2024-07-15 16:37:18.277962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.915 [2024-07-15 16:37:18.277969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.277975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.277993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.278002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.278008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.915 [2024-07-15 16:37:18.278019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.915 [2024-07-15 16:37:18.278040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.915 [2024-07-15 16:37:18.278168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.915 [2024-07-15 16:37:18.278183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.915 [2024-07-15 16:37:18.278190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.915 [2024-07-15 16:37:18.278196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.915 [2024-07-15 16:37:18.278213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.278239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.278259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.278397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.278410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.278416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.278438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.278464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.278484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.278609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.278624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.278631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.278659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.278685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.278705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.278823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.278835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.278842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.278864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.278889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.278900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.278922] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.279051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.279066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.279073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.279096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.279122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.279142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.279265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.279277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.279283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.279305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.279331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.279351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.279493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.279508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.279515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.279542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.279569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.279589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.279711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.279726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.279733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.279756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:38.916 [2024-07-15 16:37:18.279765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.279771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.279782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.279802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.283886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.283903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.283910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.283917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.283934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.283958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.283965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5b540) 00:20:38.916 [2024-07-15 16:37:18.283976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.916 [2024-07-15 16:37:18.283998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb840, cid 3, qid 0 00:20:38.916 [2024-07-15 16:37:18.284144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.284156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.284163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:38.916 [2024-07-15 16:37:18.284170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb840) on tqpair=0x1a5b540 00:20:38.916 [2024-07-15 16:37:18.284183] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:38.916 00:20:38.916 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:38.916 [2024-07-15 16:37:18.320186] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:38.916 [2024-07-15 16:37:18.320230] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569906 ] 00:20:38.916 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.916 [2024-07-15 16:37:18.355793] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:38.916 [2024-07-15 16:37:18.355847] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:38.916 [2024-07-15 16:37:18.355873] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:38.916 [2024-07-15 16:37:18.355896] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:38.916 [2024-07-15 16:37:18.355906] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:38.916 [2024-07-15 16:37:18.356127] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:38.916 [2024-07-15 16:37:18.356173] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: 
Complete the icreq send for tqpair=0x875540 0 00:20:38.916 [2024-07-15 16:37:18.362890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:38.916 [2024-07-15 16:37:18.362910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:38.916 [2024-07-15 16:37:18.362918] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:38.916 [2024-07-15 16:37:18.362924] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:38.916 [2024-07-15 16:37:18.362978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.362990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.362997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.916 [2024-07-15 16:37:18.363011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:38.916 [2024-07-15 16:37:18.363038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.916 [2024-07-15 16:37:18.369906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.916 [2024-07-15 16:37:18.369925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.916 [2024-07-15 16:37:18.369934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.916 [2024-07-15 16:37:18.369941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.916 [2024-07-15 16:37:18.369955] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:38.916 [2024-07-15 16:37:18.369966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:38.916 [2024-07-15 16:37:18.369976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to read vs wait for vs (no timeout) 00:20:38.917 [2024-07-15 16:37:18.369994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.370021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.917 [2024-07-15 16:37:18.370045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 16:37:18.370208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.370224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.370231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.370246] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:38.917 [2024-07-15 16:37:18.370260] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:38.917 [2024-07-15 16:37:18.370272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.370302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:38.917 [2024-07-15 16:37:18.370323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 16:37:18.370447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.370460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.370467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.370482] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:38.917 [2024-07-15 16:37:18.370495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:38.917 [2024-07-15 16:37:18.370507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.370531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.917 [2024-07-15 16:37:18.370552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 16:37:18.370684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.370697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.370704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.370719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:38.917 [2024-07-15 16:37:18.370735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.370761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.917 [2024-07-15 16:37:18.370781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 16:37:18.370916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.370932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.370939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.370946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.370953] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:38.917 [2024-07-15 16:37:18.370961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:38.917 [2024-07-15 16:37:18.370975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:38.917 [2024-07-15 16:37:18.371085] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:38.917 [2024-07-15 16:37:18.371092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:38.917 [2024-07-15 16:37:18.371108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.371133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.917 [2024-07-15 16:37:18.371155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 16:37:18.371315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.371330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.371338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.371352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:38.917 [2024-07-15 16:37:18.371369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.371395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.917 [2024-07-15 16:37:18.371415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 16:37:18.371540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.371552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.371559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.371573] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:38.917 [2024-07-15 16:37:18.371581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:38.917 [2024-07-15 16:37:18.371595] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:38.917 [2024-07-15 16:37:18.371609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:38.917 [2024-07-15 16:37:18.371629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.371648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.917 [2024-07-15 16:37:18.371668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 
16:37:18.371840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.917 [2024-07-15 16:37:18.371853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.917 [2024-07-15 16:37:18.371860] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371866] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=4096, cccid=0 00:20:38.917 [2024-07-15 16:37:18.371874] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d53c0) on tqpair(0x875540): expected_datao=0, payload_size=4096 00:20:38.917 [2024-07-15 16:37:18.371891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371913] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.371923] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.415889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.415909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.415916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.415923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.415935] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:38.917 [2024-07-15 16:37:18.415948] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:38.917 [2024-07-15 16:37:18.415957] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:38.917 [2024-07-15 16:37:18.415964] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 
00:20:38.917 [2024-07-15 16:37:18.415972] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:38.917 [2024-07-15 16:37:18.415980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:38.917 [2024-07-15 16:37:18.415995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:38.917 [2024-07-15 16:37:18.416008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.416016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.416022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 [2024-07-15 16:37:18.416034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.917 [2024-07-15 16:37:18.416057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.917 [2024-07-15 16:37:18.416214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.917 [2024-07-15 16:37:18.416230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.917 [2024-07-15 16:37:18.416238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.416244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.917 [2024-07-15 16:37:18.416255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.416262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.416269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x875540) 00:20:38.917 
[2024-07-15 16:37:18.416279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.917 [2024-07-15 16:37:18.416289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.416296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.917 [2024-07-15 16:37:18.416302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.416311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.918 [2024-07-15 16:37:18.416321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.416343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.918 [2024-07-15 16:37:18.416352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.416394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.918 [2024-07-15 16:37:18.416403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.416421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.416434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.416451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.918 [2024-07-15 16:37:18.416473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d53c0, cid 0, qid 0 00:20:38.918 [2024-07-15 16:37:18.416499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5540, cid 1, qid 0 00:20:38.918 [2024-07-15 16:37:18.416507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d56c0, cid 2, qid 0 00:20:38.918 [2024-07-15 16:37:18.416515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.918 [2024-07-15 16:37:18.416523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d59c0, cid 4, qid 0 00:20:38.918 [2024-07-15 16:37:18.416708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.918 [2024-07-15 16:37:18.416721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.918 [2024-07-15 16:37:18.416728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d59c0) on tqpair=0x875540 00:20:38.918 [2024-07-15 16:37:18.416742] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:38.918 [2024-07-15 16:37:18.416751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs 
specific (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.416765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.416776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.416787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.416816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.416826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.918 [2024-07-15 16:37:18.416847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d59c0, cid 4, qid 0 00:20:38.918 [2024-07-15 16:37:18.417022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.918 [2024-07-15 16:37:18.417038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.918 [2024-07-15 16:37:18.417045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d59c0) on tqpair=0x875540 00:20:38.918 [2024-07-15 16:37:18.417117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.417135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.417154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.918 
[2024-07-15 16:37:18.417163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.417174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.918 [2024-07-15 16:37:18.417195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d59c0, cid 4, qid 0 00:20:38.918 [2024-07-15 16:37:18.417372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.918 [2024-07-15 16:37:18.417388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.918 [2024-07-15 16:37:18.417395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417402] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=4096, cccid=4 00:20:38.918 [2024-07-15 16:37:18.417410] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d59c0) on tqpair(0x875540): expected_datao=0, payload_size=4096 00:20:38.918 [2024-07-15 16:37:18.417417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417427] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417435] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.918 [2024-07-15 16:37:18.417468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.918 [2024-07-15 16:37:18.417475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d59c0) on tqpair=0x875540 00:20:38.918 [2024-07-15 16:37:18.417496] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:38.918 [2024-07-15 16:37:18.417517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.417535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.417549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.417567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.918 [2024-07-15 16:37:18.417588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d59c0, cid 4, qid 0 00:20:38.918 [2024-07-15 16:37:18.417746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.918 [2024-07-15 16:37:18.417761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.918 [2024-07-15 16:37:18.417768] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417775] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=4096, cccid=4 00:20:38.918 [2024-07-15 16:37:18.417783] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d59c0) on tqpair(0x875540): expected_datao=0, payload_size=4096 00:20:38.918 [2024-07-15 16:37:18.417790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417800] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417808] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417831] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.918 [2024-07-15 16:37:18.417842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.918 [2024-07-15 16:37:18.417849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d59c0) on tqpair=0x875540 00:20:38.918 [2024-07-15 16:37:18.417890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.417911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:38.918 [2024-07-15 16:37:18.417926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.417934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x875540) 00:20:38.918 [2024-07-15 16:37:18.417945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.918 [2024-07-15 16:37:18.417966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d59c0, cid 4, qid 0 00:20:38.918 [2024-07-15 16:37:18.418107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.918 [2024-07-15 16:37:18.418120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.918 [2024-07-15 16:37:18.418127] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.418133] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=4096, cccid=4 00:20:38.918 [2024-07-15 16:37:18.418141] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d59c0) on 
tqpair(0x875540): expected_datao=0, payload_size=4096 00:20:38.918 [2024-07-15 16:37:18.418148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.418158] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.418166] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.918 [2024-07-15 16:37:18.418194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.918 [2024-07-15 16:37:18.418205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.918 [2024-07-15 16:37:18.418212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.418219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d59c0) on tqpair=0x875540 00:20:38.919 [2024-07-15 16:37:18.418231] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:38.919 [2024-07-15 16:37:18.418246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:38.919 [2024-07-15 16:37:18.418261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:38.919 [2024-07-15 16:37:18.418271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:38.919 [2024-07-15 16:37:18.418280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:38.919 [2024-07-15 16:37:18.418289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:38.919 [2024-07-15 16:37:18.418297] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:38.919 [2024-07-15 16:37:18.418305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:38.919 [2024-07-15 16:37:18.418314] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:38.919 [2024-07-15 16:37:18.418334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.418342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.418353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.418370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.418394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.418400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.418410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.919 [2024-07-15 16:37:18.418434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d59c0, cid 4, qid 0 00:20:38.919 [2024-07-15 16:37:18.418460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5b40, cid 5, qid 0 00:20:38.919 [2024-07-15 16:37:18.418627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.919 [2024-07-15 16:37:18.418639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.919 [2024-07-15 16:37:18.418646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.418653] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d59c0) on tqpair=0x875540 00:20:38.919 [2024-07-15 16:37:18.418663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.919 [2024-07-15 16:37:18.418673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.919 [2024-07-15 16:37:18.418679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.418686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5b40) on tqpair=0x875540 00:20:38.919 [2024-07-15 16:37:18.418701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.418710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.418720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.418741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5b40, cid 5, qid 0 00:20:38.919 [2024-07-15 16:37:18.418996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.919 [2024-07-15 16:37:18.419011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.919 [2024-07-15 16:37:18.419018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5b40) on tqpair=0x875540 00:20:38.919 [2024-07-15 16:37:18.419040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.419060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.419080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5b40, cid 5, qid 0 00:20:38.919 [2024-07-15 16:37:18.419209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.919 [2024-07-15 16:37:18.419224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.919 [2024-07-15 16:37:18.419232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5b40) on tqpair=0x875540 00:20:38.919 [2024-07-15 16:37:18.419254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.419273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.419293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5b40, cid 5, qid 0 00:20:38.919 [2024-07-15 16:37:18.419419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.919 [2024-07-15 16:37:18.419431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.919 [2024-07-15 16:37:18.419438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5b40) on tqpair=0x875540 00:20:38.919 [2024-07-15 16:37:18.419473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.419494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.419506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.419523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.419535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.419551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.419563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.419585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x875540) 00:20:38.919 [2024-07-15 16:37:18.419595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.919 [2024-07-15 16:37:18.419616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5b40, cid 5, qid 0 00:20:38.919 [2024-07-15 16:37:18.419627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d59c0, cid 4, qid 0 00:20:38.919 [2024-07-15 16:37:18.419649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5cc0, cid 6, qid 0 00:20:38.919 [2024-07-15 16:37:18.419657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5e40, cid 7, qid 0 
00:20:38.919 [2024-07-15 16:37:18.423908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.919 [2024-07-15 16:37:18.423925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.919 [2024-07-15 16:37:18.423932] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.423938] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=8192, cccid=5 00:20:38.919 [2024-07-15 16:37:18.423946] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d5b40) on tqpair(0x875540): expected_datao=0, payload_size=8192 00:20:38.919 [2024-07-15 16:37:18.423953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.423963] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.423970] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.423979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.919 [2024-07-15 16:37:18.423987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.919 [2024-07-15 16:37:18.423993] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424000] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=512, cccid=4 00:20:38.919 [2024-07-15 16:37:18.424007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d59c0) on tqpair(0x875540): expected_datao=0, payload_size=512 00:20:38.919 [2024-07-15 16:37:18.424014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424023] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424030] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424038] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.919 [2024-07-15 16:37:18.424050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.919 [2024-07-15 16:37:18.424057] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424063] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=512, cccid=6 00:20:38.919 [2024-07-15 16:37:18.424070] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d5cc0) on tqpair(0x875540): expected_datao=0, payload_size=512 00:20:38.919 [2024-07-15 16:37:18.424078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424086] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424093] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.919 [2024-07-15 16:37:18.424110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.919 [2024-07-15 16:37:18.424116] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424122] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x875540): datao=0, datal=4096, cccid=7 00:20:38.919 [2024-07-15 16:37:18.424129] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d5e40) on tqpair(0x875540): expected_datao=0, payload_size=4096 00:20:38.919 [2024-07-15 16:37:18.424136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424145] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424152] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:38.919 [2024-07-15 16:37:18.424169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.919 [2024-07-15 16:37:18.424175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5b40) on tqpair=0x875540 00:20:38.919 [2024-07-15 16:37:18.424215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.919 [2024-07-15 16:37:18.424226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.919 [2024-07-15 16:37:18.424232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.919 [2024-07-15 16:37:18.424239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d59c0) on tqpair=0x875540 00:20:38.920 [2024-07-15 16:37:18.424253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.920 [2024-07-15 16:37:18.424263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.920 [2024-07-15 16:37:18.424269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.920 [2024-07-15 16:37:18.424275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5cc0) on tqpair=0x875540 00:20:38.920 [2024-07-15 16:37:18.424285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.920 [2024-07-15 16:37:18.424295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.920 [2024-07-15 16:37:18.424301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.920 [2024-07-15 16:37:18.424307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5e40) on tqpair=0x875540 00:20:38.920 ===================================================== 00:20:38.920 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.920 ===================================================== 00:20:38.920 Controller Capabilities/Features 
00:20:38.920 ================================ 00:20:38.920 Vendor ID: 8086 00:20:38.920 Subsystem Vendor ID: 8086 00:20:38.920 Serial Number: SPDK00000000000001 00:20:38.920 Model Number: SPDK bdev Controller 00:20:38.920 Firmware Version: 24.09 00:20:38.920 Recommended Arb Burst: 6 00:20:38.920 IEEE OUI Identifier: e4 d2 5c 00:20:38.920 Multi-path I/O 00:20:38.920 May have multiple subsystem ports: Yes 00:20:38.920 May have multiple controllers: Yes 00:20:38.920 Associated with SR-IOV VF: No 00:20:38.920 Max Data Transfer Size: 131072 00:20:38.920 Max Number of Namespaces: 32 00:20:38.920 Max Number of I/O Queues: 127 00:20:38.920 NVMe Specification Version (VS): 1.3 00:20:38.920 NVMe Specification Version (Identify): 1.3 00:20:38.920 Maximum Queue Entries: 128 00:20:38.920 Contiguous Queues Required: Yes 00:20:38.920 Arbitration Mechanisms Supported 00:20:38.920 Weighted Round Robin: Not Supported 00:20:38.920 Vendor Specific: Not Supported 00:20:38.920 Reset Timeout: 15000 ms 00:20:38.920 Doorbell Stride: 4 bytes 00:20:38.920 NVM Subsystem Reset: Not Supported 00:20:38.920 Command Sets Supported 00:20:38.920 NVM Command Set: Supported 00:20:38.920 Boot Partition: Not Supported 00:20:38.920 Memory Page Size Minimum: 4096 bytes 00:20:38.920 Memory Page Size Maximum: 4096 bytes 00:20:38.920 Persistent Memory Region: Not Supported 00:20:38.920 Optional Asynchronous Events Supported 00:20:38.920 Namespace Attribute Notices: Supported 00:20:38.920 Firmware Activation Notices: Not Supported 00:20:38.920 ANA Change Notices: Not Supported 00:20:38.920 PLE Aggregate Log Change Notices: Not Supported 00:20:38.920 LBA Status Info Alert Notices: Not Supported 00:20:38.920 EGE Aggregate Log Change Notices: Not Supported 00:20:38.920 Normal NVM Subsystem Shutdown event: Not Supported 00:20:38.920 Zone Descriptor Change Notices: Not Supported 00:20:38.920 Discovery Log Change Notices: Not Supported 00:20:38.920 Controller Attributes 00:20:38.920 128-bit Host Identifier: 
Supported 00:20:38.920 Non-Operational Permissive Mode: Not Supported 00:20:38.920 NVM Sets: Not Supported 00:20:38.920 Read Recovery Levels: Not Supported 00:20:38.920 Endurance Groups: Not Supported 00:20:38.920 Predictable Latency Mode: Not Supported 00:20:38.920 Traffic Based Keep ALive: Not Supported 00:20:38.920 Namespace Granularity: Not Supported 00:20:38.920 SQ Associations: Not Supported 00:20:38.920 UUID List: Not Supported 00:20:38.920 Multi-Domain Subsystem: Not Supported 00:20:38.920 Fixed Capacity Management: Not Supported 00:20:38.920 Variable Capacity Management: Not Supported 00:20:38.920 Delete Endurance Group: Not Supported 00:20:38.920 Delete NVM Set: Not Supported 00:20:38.920 Extended LBA Formats Supported: Not Supported 00:20:38.920 Flexible Data Placement Supported: Not Supported 00:20:38.920 00:20:38.920 Controller Memory Buffer Support 00:20:38.920 ================================ 00:20:38.920 Supported: No 00:20:38.920 00:20:38.920 Persistent Memory Region Support 00:20:38.920 ================================ 00:20:38.920 Supported: No 00:20:38.920 00:20:38.920 Admin Command Set Attributes 00:20:38.920 ============================ 00:20:38.920 Security Send/Receive: Not Supported 00:20:38.920 Format NVM: Not Supported 00:20:38.920 Firmware Activate/Download: Not Supported 00:20:38.920 Namespace Management: Not Supported 00:20:38.920 Device Self-Test: Not Supported 00:20:38.920 Directives: Not Supported 00:20:38.920 NVMe-MI: Not Supported 00:20:38.920 Virtualization Management: Not Supported 00:20:38.920 Doorbell Buffer Config: Not Supported 00:20:38.920 Get LBA Status Capability: Not Supported 00:20:38.920 Command & Feature Lockdown Capability: Not Supported 00:20:38.920 Abort Command Limit: 4 00:20:38.920 Async Event Request Limit: 4 00:20:38.920 Number of Firmware Slots: N/A 00:20:38.920 Firmware Slot 1 Read-Only: N/A 00:20:38.920 Firmware Activation Without Reset: N/A 00:20:38.920 Multiple Update Detection Support: N/A 00:20:38.920 
Firmware Update Granularity: No Information Provided 00:20:38.920 Per-Namespace SMART Log: No 00:20:38.920 Asymmetric Namespace Access Log Page: Not Supported 00:20:38.920 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:38.920 Command Effects Log Page: Supported 00:20:38.920 Get Log Page Extended Data: Supported 00:20:38.920 Telemetry Log Pages: Not Supported 00:20:38.920 Persistent Event Log Pages: Not Supported 00:20:38.920 Supported Log Pages Log Page: May Support 00:20:38.920 Commands Supported & Effects Log Page: Not Supported 00:20:38.920 Feature Identifiers & Effects Log Page:May Support 00:20:38.920 NVMe-MI Commands & Effects Log Page: May Support 00:20:38.920 Data Area 4 for Telemetry Log: Not Supported 00:20:38.920 Error Log Page Entries Supported: 128 00:20:38.920 Keep Alive: Supported 00:20:38.920 Keep Alive Granularity: 10000 ms 00:20:38.920 00:20:38.920 NVM Command Set Attributes 00:20:38.920 ========================== 00:20:38.920 Submission Queue Entry Size 00:20:38.920 Max: 64 00:20:38.920 Min: 64 00:20:38.920 Completion Queue Entry Size 00:20:38.920 Max: 16 00:20:38.920 Min: 16 00:20:38.920 Number of Namespaces: 32 00:20:38.920 Compare Command: Supported 00:20:38.920 Write Uncorrectable Command: Not Supported 00:20:38.920 Dataset Management Command: Supported 00:20:38.920 Write Zeroes Command: Supported 00:20:38.920 Set Features Save Field: Not Supported 00:20:38.920 Reservations: Supported 00:20:38.920 Timestamp: Not Supported 00:20:38.920 Copy: Supported 00:20:38.920 Volatile Write Cache: Present 00:20:38.920 Atomic Write Unit (Normal): 1 00:20:38.920 Atomic Write Unit (PFail): 1 00:20:38.920 Atomic Compare & Write Unit: 1 00:20:38.920 Fused Compare & Write: Supported 00:20:38.920 Scatter-Gather List 00:20:38.920 SGL Command Set: Supported 00:20:38.920 SGL Keyed: Supported 00:20:38.920 SGL Bit Bucket Descriptor: Not Supported 00:20:38.920 SGL Metadata Pointer: Not Supported 00:20:38.920 Oversized SGL: Not Supported 00:20:38.920 SGL Metadata 
Address: Not Supported 00:20:38.920 SGL Offset: Supported 00:20:38.920 Transport SGL Data Block: Not Supported 00:20:38.920 Replay Protected Memory Block: Not Supported 00:20:38.920 00:20:38.920 Firmware Slot Information 00:20:38.920 ========================= 00:20:38.920 Active slot: 1 00:20:38.920 Slot 1 Firmware Revision: 24.09 00:20:38.920 00:20:38.920 00:20:38.920 Commands Supported and Effects 00:20:38.920 ============================== 00:20:38.920 Admin Commands 00:20:38.920 -------------- 00:20:38.920 Get Log Page (02h): Supported 00:20:38.920 Identify (06h): Supported 00:20:38.920 Abort (08h): Supported 00:20:38.920 Set Features (09h): Supported 00:20:38.920 Get Features (0Ah): Supported 00:20:38.920 Asynchronous Event Request (0Ch): Supported 00:20:38.920 Keep Alive (18h): Supported 00:20:38.920 I/O Commands 00:20:38.920 ------------ 00:20:38.920 Flush (00h): Supported LBA-Change 00:20:38.920 Write (01h): Supported LBA-Change 00:20:38.920 Read (02h): Supported 00:20:38.921 Compare (05h): Supported 00:20:38.921 Write Zeroes (08h): Supported LBA-Change 00:20:38.921 Dataset Management (09h): Supported LBA-Change 00:20:38.921 Copy (19h): Supported LBA-Change 00:20:38.921 00:20:38.921 Error Log 00:20:38.921 ========= 00:20:38.921 00:20:38.921 Arbitration 00:20:38.921 =========== 00:20:38.921 Arbitration Burst: 1 00:20:38.921 00:20:38.921 Power Management 00:20:38.921 ================ 00:20:38.921 Number of Power States: 1 00:20:38.921 Current Power State: Power State #0 00:20:38.921 Power State #0: 00:20:38.921 Max Power: 0.00 W 00:20:38.921 Non-Operational State: Operational 00:20:38.921 Entry Latency: Not Reported 00:20:38.921 Exit Latency: Not Reported 00:20:38.921 Relative Read Throughput: 0 00:20:38.921 Relative Read Latency: 0 00:20:38.921 Relative Write Throughput: 0 00:20:38.921 Relative Write Latency: 0 00:20:38.921 Idle Power: Not Reported 00:20:38.921 Active Power: Not Reported 00:20:38.921 Non-Operational Permissive Mode: Not Supported 
00:20:38.921 00:20:38.921 Health Information 00:20:38.921 ================== 00:20:38.921 Critical Warnings: 00:20:38.921 Available Spare Space: OK 00:20:38.921 Temperature: OK 00:20:38.921 Device Reliability: OK 00:20:38.921 Read Only: No 00:20:38.921 Volatile Memory Backup: OK 00:20:38.921 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:38.921 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:38.921 Available Spare: 0% 00:20:38.921 Available Spare Threshold: 0% 00:20:38.921 Life Percentage Used:[2024-07-15 16:37:18.424443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.424455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.424466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.424489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5e40, cid 7, qid 0 00:20:38.921 [2024-07-15 16:37:18.424670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.424686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.424693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.424704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5e40) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.424753] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:38.921 [2024-07-15 16:37:18.424773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d53c0) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.424784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.921 [2024-07-15 
16:37:18.424793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5540) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.424800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.921 [2024-07-15 16:37:18.424808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d56c0) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.424831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.921 [2024-07-15 16:37:18.424839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.424846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.921 [2024-07-15 16:37:18.424858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.424866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.424872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.424908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.424932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.921 [2024-07-15 16:37:18.425092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.425105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.425112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on 
tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.425130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.425154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.425179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.921 [2024-07-15 16:37:18.425323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.425338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.425345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.425360] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:38.921 [2024-07-15 16:37:18.425368] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:38.921 [2024-07-15 16:37:18.425384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.425410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.425434] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.921 [2024-07-15 16:37:18.425558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.425574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.425581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.425604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.425631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.425652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.921 [2024-07-15 16:37:18.425774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.425789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.425796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.425819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.425835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.425846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.425866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.921 [2024-07-15 16:37:18.425998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.426012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.426019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.426026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.426042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.426051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.426057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.426068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.426089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.921 [2024-07-15 16:37:18.426211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.426227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.426234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.426240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.426257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:38.921 [2024-07-15 16:37:18.426266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.426273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.921 [2024-07-15 16:37:18.426283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.921 [2024-07-15 16:37:18.426304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.921 [2024-07-15 16:37:18.426430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.921 [2024-07-15 16:37:18.426445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.921 [2024-07-15 16:37:18.426452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.921 [2024-07-15 16:37:18.426458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.921 [2024-07-15 16:37:18.426475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.426484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.426491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.426501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 [2024-07-15 16:37:18.426522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.426644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.922 [2024-07-15 16:37:18.426659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.426666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 
[2024-07-15 16:37:18.426673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.426689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.426698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.426705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.426715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 [2024-07-15 16:37:18.426736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.426858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.922 [2024-07-15 16:37:18.426873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.426890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.426897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.426914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.426924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.426930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.426941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 [2024-07-15 16:37:18.426962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.427086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:38.922 [2024-07-15 16:37:18.427098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.427105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.427127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.427153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 [2024-07-15 16:37:18.427173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.427298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.922 [2024-07-15 16:37:18.427316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.427324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.427347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.427374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 
[2024-07-15 16:37:18.427395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.427518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.922 [2024-07-15 16:37:18.427533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.427540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.427563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.427589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 [2024-07-15 16:37:18.427610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.427733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.922 [2024-07-15 16:37:18.427747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.427754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.427776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.427792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.427802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 [2024-07-15 16:37:18.427822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.431892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.922 [2024-07-15 16:37:18.431908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.431915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.431922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.431954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.431964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.431971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x875540) 00:20:38.922 [2024-07-15 16:37:18.431982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.922 [2024-07-15 16:37:18.432004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d5840, cid 3, qid 0 00:20:38.922 [2024-07-15 16:37:18.432163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.922 [2024-07-15 16:37:18.432179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.922 [2024-07-15 16:37:18.432186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.922 [2024-07-15 16:37:18.432196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d5840) on tqpair=0x875540 00:20:38.922 [2024-07-15 16:37:18.432210] 
nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:20:38.922 0% 00:20:38.922 Data Units Read: 0 00:20:38.922 Data Units Written: 0 00:20:38.922 Host Read Commands: 0 00:20:38.922 Host Write Commands: 0 00:20:38.922 Controller Busy Time: 0 minutes 00:20:38.922 Power Cycles: 0 00:20:38.922 Power On Hours: 0 hours 00:20:38.922 Unsafe Shutdowns: 0 00:20:38.922 Unrecoverable Media Errors: 0 00:20:38.922 Lifetime Error Log Entries: 0 00:20:38.922 Warning Temperature Time: 0 minutes 00:20:38.922 Critical Temperature Time: 0 minutes 00:20:38.922 00:20:38.922 Number of Queues 00:20:38.922 ================ 00:20:38.922 Number of I/O Submission Queues: 127 00:20:38.922 Number of I/O Completion Queues: 127 00:20:38.922 00:20:38.922 Active Namespaces 00:20:38.922 ================= 00:20:38.922 Namespace ID:1 00:20:38.922 Error Recovery Timeout: Unlimited 00:20:38.922 Command Set Identifier: NVM (00h) 00:20:38.922 Deallocate: Supported 00:20:38.922 Deallocated/Unwritten Error: Not Supported 00:20:38.922 Deallocated Read Value: Unknown 00:20:38.922 Deallocate in Write Zeroes: Not Supported 00:20:38.922 Deallocated Guard Field: 0xFFFF 00:20:38.922 Flush: Supported 00:20:38.922 Reservation: Supported 00:20:38.922 Namespace Sharing Capabilities: Multiple Controllers 00:20:38.922 Size (in LBAs): 131072 (0GiB) 00:20:38.922 Capacity (in LBAs): 131072 (0GiB) 00:20:38.922 Utilization (in LBAs): 131072 (0GiB) 00:20:38.922 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:38.922 EUI64: ABCDEF0123456789 00:20:38.922 UUID: ca62769d-c252-4249-858d-73d5299fe145 00:20:38.922 Thin Provisioning: Not Supported 00:20:38.922 Per-NS Atomic Units: Yes 00:20:38.922 Atomic Boundary Size (Normal): 0 00:20:38.922 Atomic Boundary Size (PFail): 0 00:20:38.922 Atomic Boundary Offset: 0 00:20:38.922 Maximum Single Source Range Length: 65535 00:20:38.922 Maximum Copy Length: 65535 00:20:38.922 Maximum Source Range Count: 1 00:20:38.922 
NGUID/EUI64 Never Reused: No 00:20:38.922 Namespace Write Protected: No 00:20:38.922 Number of LBA Formats: 1 00:20:38.922 Current LBA Format: LBA Format #00 00:20:38.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:38.922 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:38.922 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.923 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.923 rmmod nvme_tcp 00:20:38.923 rmmod nvme_fabrics 00:20:38.923 rmmod nvme_keyring 00:20:39.229 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.229 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:39.229 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1569880 ']' 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1569880 00:20:39.230 16:37:18 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1569880 ']' 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1569880 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1569880 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1569880' 00:20:39.230 killing process with pid 1569880 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1569880 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1569880 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.230 16:37:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.765 16:37:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.765 00:20:41.765 real 
0m5.429s 00:20:41.765 user 0m4.394s 00:20:41.765 sys 0m1.930s 00:20:41.765 16:37:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.765 16:37:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.765 ************************************ 00:20:41.765 END TEST nvmf_identify 00:20:41.765 ************************************ 00:20:41.765 16:37:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:41.765 16:37:20 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:41.765 16:37:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:41.765 16:37:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.765 16:37:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:41.765 ************************************ 00:20:41.765 START TEST nvmf_perf 00:20:41.765 ************************************ 00:20:41.765 16:37:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:41.766 * Looking for test storage... 
00:20:41.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.766 16:37:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.669 16:37:22 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:43.669 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:43.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:43.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:43.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:20:43.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.670 16:37:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.670 
16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:43.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:20:43.670 00:20:43.670 --- 10.0.0.2 ping statistics --- 00:20:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.670 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:20:43.670 00:20:43.670 --- 10.0.0.1 ping statistics --- 00:20:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.670 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1571954 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1571954 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1571954 ']' 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.670 16:37:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.670 [2024-07-15 16:37:23.154383] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:20:43.670 [2024-07-15 16:37:23.154451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.670 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.670 [2024-07-15 16:37:23.221017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.929 [2024-07-15 16:37:23.337256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.929 [2024-07-15 16:37:23.337318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.929 [2024-07-15 16:37:23.337334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.929 [2024-07-15 16:37:23.337356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.929 [2024-07-15 16:37:23.337368] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.929 [2024-07-15 16:37:23.337449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.929 [2024-07-15 16:37:23.337521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.929 [2024-07-15 16:37:23.337798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.929 [2024-07-15 16:37:23.337802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:44.865 16:37:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:88:00.0 ']' 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:48.156 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:48.414 [2024-07-15 16:37:27.967060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.414 16:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.671 16:37:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:48.671 16:37:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.929 16:37:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:48.929 16:37:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:49.186 16:37:28 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.443 [2024-07-15 16:37:28.974771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.443 16:37:28 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:49.700 16:37:29 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:49.700 16:37:29 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 
00:20:49.700 16:37:29 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:49.700 16:37:29 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:51.074 Initializing NVMe Controllers 00:20:51.074 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:51.074 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:51.074 Initialization complete. Launching workers. 00:20:51.074 ======================================================== 00:20:51.074 Latency(us) 00:20:51.074 Device Information : IOPS MiB/s Average min max 00:20:51.074 PCIE (0000:88:00.0) NSID 1 from core 0: 84504.38 330.10 378.18 43.77 7268.64 00:20:51.074 ======================================================== 00:20:51.074 Total : 84504.38 330.10 378.18 43.77 7268.64 00:20:51.074 00:20:51.074 16:37:30 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:51.074 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.448 Initializing NVMe Controllers 00:20:52.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:52.448 Initialization complete. Launching workers. 
00:20:52.448 ======================================================== 00:20:52.448 Latency(us) 00:20:52.448 Device Information : IOPS MiB/s Average min max 00:20:52.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 139.00 0.54 7428.26 187.92 46064.52 00:20:52.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 84.00 0.33 11993.87 4960.99 54883.17 00:20:52.448 ======================================================== 00:20:52.448 Total : 223.00 0.87 9148.04 187.92 54883.17 00:20:52.448 00:20:52.448 16:37:31 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.448 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.821 Initializing NVMe Controllers 00:20:53.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:53.821 Initialization complete. Launching workers. 
00:20:53.821 ======================================================== 00:20:53.821 Latency(us) 00:20:53.821 Device Information : IOPS MiB/s Average min max 00:20:53.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8443.97 32.98 3801.45 545.92 7570.75 00:20:53.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3863.99 15.09 8319.35 5490.73 17295.33 00:20:53.821 ======================================================== 00:20:53.821 Total : 12307.96 48.08 5219.81 545.92 17295.33 00:20:53.821 00:20:53.821 16:37:33 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:53.821 16:37:33 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:53.821 16:37:33 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:53.821 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.354 Initializing NVMe Controllers 00:20:56.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.354 Controller IO queue size 128, less than required. 00:20:56.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.354 Controller IO queue size 128, less than required. 00:20:56.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:56.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:56.354 Initialization complete. Launching workers. 
00:20:56.354 ======================================================== 00:20:56.354 Latency(us) 00:20:56.354 Device Information : IOPS MiB/s Average min max 00:20:56.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1046.31 261.58 125613.69 69568.97 193911.39 00:20:56.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.89 152.72 214680.36 126499.69 325799.63 00:20:56.354 ======================================================== 00:20:56.354 Total : 1657.21 414.30 158446.11 69568.97 325799.63 00:20:56.355 00:20:56.355 16:37:35 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:56.355 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.643 No valid NVMe controllers or AIO or URING devices found 00:20:56.644 Initializing NVMe Controllers 00:20:56.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.644 Controller IO queue size 128, less than required. 00:20:56.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.644 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:56.644 Controller IO queue size 128, less than required. 00:20:56.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.644 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:56.644 WARNING: Some requested NVMe devices were skipped 00:20:56.644 16:37:35 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:56.644 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.175 Initializing NVMe Controllers 00:20:59.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.175 Controller IO queue size 128, less than required. 00:20:59.175 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:59.175 Controller IO queue size 128, less than required. 00:20:59.175 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:59.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:59.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:59.175 Initialization complete. Launching workers. 
00:20:59.176 00:20:59.176 ==================== 00:20:59.176 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:59.176 TCP transport: 00:20:59.176 polls: 29871 00:20:59.176 idle_polls: 9585 00:20:59.176 sock_completions: 20286 00:20:59.176 nvme_completions: 3637 00:20:59.176 submitted_requests: 5464 00:20:59.176 queued_requests: 1 00:20:59.176 00:20:59.176 ==================== 00:20:59.176 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:59.176 TCP transport: 00:20:59.176 polls: 29392 00:20:59.176 idle_polls: 8778 00:20:59.176 sock_completions: 20614 00:20:59.176 nvme_completions: 4209 00:20:59.176 submitted_requests: 6300 00:20:59.176 queued_requests: 1 00:20:59.176 ======================================================== 00:20:59.176 Latency(us) 00:20:59.176 Device Information : IOPS MiB/s Average min max 00:20:59.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 907.59 226.90 146542.47 94876.82 247080.85 00:20:59.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1050.37 262.59 124081.34 64639.31 174592.95 00:20:59.176 ======================================================== 00:20:59.176 Total : 1957.96 489.49 134492.95 64639.31 247080.85 00:20:59.176 00:20:59.433 16:37:38 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:59.433 16:37:38 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:59.693 16:37:39 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.693 rmmod nvme_tcp 00:20:59.693 rmmod nvme_fabrics 00:20:59.693 rmmod nvme_keyring 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1571954 ']' 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1571954 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1571954 ']' 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1571954 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571954 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571954' 00:20:59.693 killing process with pid 1571954 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1571954 00:20:59.693 16:37:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1571954 00:21:01.599 16:37:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.599 16:37:40 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.599 16:37:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.599 16:37:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.599 16:37:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.599 16:37:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.599 16:37:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.599 16:37:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.502 16:37:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.502 00:21:03.502 real 0m21.997s 00:21:03.502 user 1m9.328s 00:21:03.502 sys 0m4.874s 00:21:03.502 16:37:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.502 16:37:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:03.502 ************************************ 00:21:03.502 END TEST nvmf_perf 00:21:03.502 ************************************ 00:21:03.502 16:37:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:03.502 16:37:42 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:03.502 16:37:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:03.502 16:37:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.502 16:37:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.502 ************************************ 00:21:03.502 START TEST nvmf_fio_host 00:21:03.502 ************************************ 00:21:03.502 16:37:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:03.502 * Looking for test 
storage... 00:21:03.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.502 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:03.503 
16:37:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.503 16:37:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:05.400 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:05.400 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:05.400 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:05.400 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.400 
16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.400 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.401 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.401 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.401 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.401 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.401 16:37:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.658 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.658 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.658 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:21:05.659 00:21:05.659 --- 10.0.0.2 ping statistics --- 00:21:05.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.659 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:05.659 00:21:05.659 --- 10.0.0.1 ping statistics --- 00:21:05.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.659 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.659 16:37:45 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1575929 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1575929 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1575929 ']' 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.659 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.659 [2024-07-15 16:37:45.146948] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:21:05.659 [2024-07-15 16:37:45.147034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.659 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.659 [2024-07-15 16:37:45.210108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.915 [2024-07-15 16:37:45.318244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.916 [2024-07-15 16:37:45.318294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.916 [2024-07-15 16:37:45.318322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.916 [2024-07-15 16:37:45.318334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.916 [2024-07-15 16:37:45.318344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:05.916 [2024-07-15 16:37:45.318423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.916 [2024-07-15 16:37:45.318484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.916 [2024-07-15 16:37:45.318763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.916 [2024-07-15 16:37:45.318767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.916 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.916 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:05.916 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:06.173 [2024-07-15 16:37:45.725741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.173 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:06.173 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.173 16:37:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.430 16:37:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:06.688 Malloc1 00:21:06.688 16:37:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.945 16:37:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:07.203 16:37:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.461 
[2024-07-15 16:37:46.888813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.461 16:37:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:07.719 16:37:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:07.978 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:07.978 fio-3.35 00:21:07.978 Starting 1 thread 00:21:07.978 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.506 00:21:10.506 test: (groupid=0, jobs=1): err= 0: pid=1576284: Mon Jul 15 16:37:49 2024 00:21:10.506 read: IOPS=7902, BW=30.9MiB/s (32.4MB/s)(62.0MiB/2007msec) 00:21:10.506 slat (nsec): 
min=1947, max=133964, avg=2554.55, stdev=1731.22 00:21:10.506 clat (usec): min=3994, max=15450, avg=8973.97, stdev=703.36 00:21:10.506 lat (usec): min=4022, max=15452, avg=8976.52, stdev=703.28 00:21:10.506 clat percentiles (usec): 00:21:10.506 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:21:10.506 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:21:10.506 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:21:10.506 | 99.00th=[10683], 99.50th=[10814], 99.90th=[13435], 99.95th=[14222], 00:21:10.506 | 99.99th=[15401] 00:21:10.506 bw ( KiB/s): min=30392, max=32080, per=99.90%, avg=31580.00, stdev=795.74, samples=4 00:21:10.506 iops : min= 7598, max= 8020, avg=7895.00, stdev=198.93, samples=4 00:21:10.506 write: IOPS=7875, BW=30.8MiB/s (32.3MB/s)(61.7MiB/2007msec); 0 zone resets 00:21:10.506 slat (usec): min=2, max=118, avg= 2.63, stdev= 1.37 00:21:10.506 clat (usec): min=1538, max=13432, avg=7191.06, stdev=614.44 00:21:10.506 lat (usec): min=1545, max=13435, avg=7193.70, stdev=614.40 00:21:10.506 clat percentiles (usec): 00:21:10.506 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:21:10.506 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:21:10.506 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8094], 00:21:10.506 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[11600], 99.95th=[12387], 00:21:10.506 | 99.99th=[12649] 00:21:10.506 bw ( KiB/s): min=31400, max=31608, per=99.99%, avg=31502.00, stdev=113.40, samples=4 00:21:10.506 iops : min= 7850, max= 7902, avg=7875.50, stdev=28.35, samples=4 00:21:10.506 lat (msec) : 2=0.01%, 4=0.05%, 10=97.02%, 20=2.92% 00:21:10.506 cpu : usr=55.28%, sys=38.63%, ctx=75, majf=0, minf=41 00:21:10.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:10.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:10.506 issued rwts: total=15861,15807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:10.506 00:21:10.506 Run status group 0 (all jobs): 00:21:10.506 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=62.0MiB (65.0MB), run=2007-2007msec 00:21:10.506 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.7MiB (64.7MB), run=2007-2007msec 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:10.506 16:37:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:10.506 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:10.506 fio-3.35 00:21:10.506 Starting 1 thread 00:21:10.506 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.032 00:21:13.032 test: (groupid=0, jobs=1): err= 0: pid=1576734: Mon Jul 15 16:37:52 2024 00:21:13.032 read: IOPS=8130, BW=127MiB/s (133MB/s)(255MiB/2007msec) 00:21:13.032 slat (usec): 
min=2, max=100, avg= 3.76, stdev= 1.65 00:21:13.032 clat (usec): min=2281, max=19871, avg=9428.58, stdev=2493.55 00:21:13.032 lat (usec): min=2285, max=19875, avg=9432.34, stdev=2493.58 00:21:13.032 clat percentiles (usec): 00:21:13.032 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7373], 00:21:13.032 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9765], 00:21:13.032 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12649], 95.00th=[14222], 00:21:13.032 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:21:13.032 | 99.99th=[17695] 00:21:13.032 bw ( KiB/s): min=59328, max=75136, per=51.54%, avg=67048.00, stdev=6461.85, samples=4 00:21:13.032 iops : min= 3708, max= 4696, avg=4190.50, stdev=403.87, samples=4 00:21:13.032 write: IOPS=4666, BW=72.9MiB/s (76.5MB/s)(137MiB/1874msec); 0 zone resets 00:21:13.032 slat (usec): min=30, max=194, avg=34.19, stdev= 5.75 00:21:13.032 clat (usec): min=4492, max=20221, avg=10907.27, stdev=1961.91 00:21:13.032 lat (usec): min=4523, max=20253, avg=10941.46, stdev=1962.39 00:21:13.032 clat percentiles (usec): 00:21:13.032 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9241], 00:21:13.032 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:21:13.032 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13566], 95.00th=[14484], 00:21:13.032 | 99.00th=[16057], 99.50th=[16909], 99.90th=[18482], 99.95th=[19792], 00:21:13.032 | 99.99th=[20317] 00:21:13.032 bw ( KiB/s): min=61216, max=78400, per=93.28%, avg=69648.00, stdev=7036.24, samples=4 00:21:13.032 iops : min= 3826, max= 4900, avg=4353.00, stdev=439.77, samples=4 00:21:13.032 lat (msec) : 4=0.15%, 10=52.76%, 20=47.07%, 50=0.01% 00:21:13.032 cpu : usr=73.23%, sys=23.23%, ctx=32, majf=0, minf=57 00:21:13.032 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:13.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:13.032 issued rwts: total=16318,8745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.032 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:13.032 00:21:13.032 Run status group 0 (all jobs): 00:21:13.032 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (267MB), run=2007-2007msec 00:21:13.032 WRITE: bw=72.9MiB/s (76.5MB/s), 72.9MiB/s-72.9MiB/s (76.5MB/s-76.5MB/s), io=137MiB (143MB), run=1874-1874msec 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.032 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:13.032 rmmod nvme_tcp 00:21:13.032 rmmod nvme_fabrics 00:21:13.032 rmmod nvme_keyring 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1575929 ']' 00:21:13.288 
16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1575929 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1575929 ']' 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1575929 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1575929 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1575929' 00:21:13.288 killing process with pid 1575929 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1575929 00:21:13.288 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1575929 00:21:13.546 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:13.546 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:13.546 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:13.546 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.546 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.546 16:37:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.546 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.547 16:37:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.457 16:37:54 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.457 00:21:15.457 real 0m12.046s 00:21:15.457 user 0m35.454s 00:21:15.457 sys 0m4.099s 00:21:15.457 16:37:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.457 16:37:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.457 ************************************ 00:21:15.457 END TEST nvmf_fio_host 00:21:15.457 ************************************ 00:21:15.457 16:37:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:15.457 16:37:55 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:15.457 16:37:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:15.457 16:37:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.457 16:37:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:15.457 ************************************ 00:21:15.457 START TEST nvmf_failover 00:21:15.457 ************************************ 00:21:15.457 16:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:15.719 * Looking for test storage... 
00:21:15.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.719 16:37:55 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.719 16:37:55 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.719 16:37:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.632 16:37:56 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.632 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:17.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:17.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:17.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:17.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.633 16:37:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:17.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:21:17.633 00:21:17.633 --- 10.0.0.2 ping statistics --- 00:21:17.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.633 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:21:17.633 00:21:17.633 --- 10.0.0.1 ping statistics --- 00:21:17.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.633 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1578926 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1578926 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1578926 ']' 
00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.633 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:17.633 [2024-07-15 16:37:57.204362] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:21:17.633 [2024-07-15 16:37:57.204457] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.892 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.892 [2024-07-15 16:37:57.269313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:17.892 [2024-07-15 16:37:57.383614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.892 [2024-07-15 16:37:57.383666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.892 [2024-07-15 16:37:57.383694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.892 [2024-07-15 16:37:57.383706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.892 [2024-07-15 16:37:57.383715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.892 [2024-07-15 16:37:57.383801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.892 [2024-07-15 16:37:57.383867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.892 [2024-07-15 16:37:57.383870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.150 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.150 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:18.150 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.150 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:18.150 16:37:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:18.150 16:37:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.150 16:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:18.408 [2024-07-15 16:37:57.798024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.408 16:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:18.666 Malloc0 00:21:18.666 16:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.923 16:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:19.181 16:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.438 [2024-07-15 16:37:58.937218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.438 16:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:19.696 [2024-07-15 16:37:59.222029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:19.696 16:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:19.954 [2024-07-15 16:37:59.515127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1579218 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1579218 /var/tmp/bdevperf.sock 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1579218 ']' 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.954 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:20.523 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.523 16:37:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:20.523 16:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.781 NVMe0n1 00:21:20.781 16:38:00 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:21.348 00:21:21.348 16:38:00 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1579354 00:21:21.348 16:38:00 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:21.348 16:38:00 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:22.277 16:38:01 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.535 [2024-07-15 16:38:02.091857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30070 is same with the state(5) to be set 00:21:22.535 [2024-07-15 16:38:02.091969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30070 is 
same with the state(5) to be set 00:21:22.535 16:38:02 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:25.812 16:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.069 00:21:26.069 16:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:26.328 [2024-07-15 16:38:05.671511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31640 is same with the state(5) to be set 00:21:26.328 16:38:05 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:29.615 16:38:08 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.615 [2024-07-15 16:38:08.927676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.615 16:38:08 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:30.551 16:38:09 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:30.809 [2024-07-15 16:38:10.198230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31e70 is same with the
state(5) to be set 00:21:30.809 16:38:10 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1579354 00:21:37.402 0 00:21:37.402 16:38:15 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1579218 00:21:37.402 16:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1579218 ']' 00:21:37.402 16:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1579218 00:21:37.402 16:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:37.402 16:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.402 16:38:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1579218 00:21:37.402 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.402 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.402 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1579218' 00:21:37.402 killing process with pid 1579218 00:21:37.402 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1579218 00:21:37.402 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1579218 00:21:37.402 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:37.402 [2024-07-15 16:37:59.580085] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:21:37.402 [2024-07-15 16:37:59.580169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579218 ] 00:21:37.402 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.402 [2024-07-15 16:37:59.639905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.402 [2024-07-15 16:37:59.752098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.402 Running I/O for 15 seconds... 00:21:37.402 [2024-07-15 16:38:02.092333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.402 [2024-07-15 16:38:02.092373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.402 [2024-07-15 16:38:02.092405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.402 [2024-07-15 16:38:02.092432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.402 [2024-07-15 16:38:02.092459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:37.402 [2024-07-15 16:38:02.092471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d0f0 is same with the state(5) to be set 00:21:37.402 [2024-07-15 16:38:02.092544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.092980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.092994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.093007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.402 [2024-07-15 16:38:02.093022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.402 [2024-07-15 16:38:02.093035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:37.403 [2024-07-15 16:38:02.093063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:37.403 [2024-07-15 16:38:02.093567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.403 [2024-07-15 16:38:02.093732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.403 [2024-07-15 16:38:02.093760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.403 [2024-07-15 16:38:02.093788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.403 [2024-07-15 16:38:02.093815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.403 [2024-07-15 16:38:02.093843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.403 [2024-07-15 16:38:02.093874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.403 [2024-07-15 16:38:02.093911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.403 [2024-07-15 16:38:02.093925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.093940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.093953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.093968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.093982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.093996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 
[2024-07-15 16:38:02.094068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094243] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.404 [2024-07-15 16:38:02.094425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.404 [2024-07-15 16:38:02.094452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.404 [2024-07-15 16:38:02.094479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.404 [2024-07-15 16:38:02.094508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.404 [2024-07-15 16:38:02.094535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.404 [2024-07-15 16:38:02.094565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.404 [2024-07-15 16:38:02.094792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.404 [2024-07-15 16:38:02.094807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.094820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.094834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.094847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.094873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.094908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 
[2024-07-15 16:38:02.094925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.094939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.094954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.094967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.094983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.094997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 
16:38:02.095450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095608] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.405 [2024-07-15 16:38:02.095736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.405 [2024-07-15 16:38:02.095750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.095977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.095990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.096018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.096045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.096073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.096101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.096129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:02.096161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.406 [2024-07-15 16:38:02.096189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.406 [2024-07-15 16:38:02.096216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.406 [2024-07-15 16:38:02.096244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.406 
[2024-07-15 16:38:02.096272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.406 [2024-07-15 16:38:02.096299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.406 [2024-07-15 16:38:02.096327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:37.406 [2024-07-15 16:38:02.096371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:37.406 [2024-07-15 16:38:02.096383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81888 len:8 PRP1 0x0 PRP2 0x0 00:21:37.406 [2024-07-15 16:38:02.096396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:02.096455] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bc3390 was disconnected and freed. reset controller. 00:21:37.406 [2024-07-15 16:38:02.096473] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:37.406 [2024-07-15 16:38:02.096488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:37.406 [2024-07-15 16:38:02.099767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.406 [2024-07-15 16:38:02.099805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d0f0 (9): Bad file descriptor 00:21:37.406 [2024-07-15 16:38:02.132935] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.406 [2024-07-15 16:38:05.671715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:05.671760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:05.671791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:05.671824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:05.671842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:05.671857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:05.671872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:05.671897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:05.671913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66032 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:05.671927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.406 [2024-07-15 16:38:05.671943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.406 [2024-07-15 16:38:05.671956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.671972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.671986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.407 [2024-07-15 16:38:05.672222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 
[2024-07-15 16:38:05.672437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.407 [2024-07-15 16:38:05.672693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.407 [2024-07-15 16:38:05.672708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 
[2024-07-15 16:38:05.672943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.672977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.672993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673110] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.408 [2024-07-15 16:38:05.673430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-07-15 16:38:05.673458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-07-15 16:38:05.673486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-07-15 16:38:05.673515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.408 [2024-07-15 16:38:05.673543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.408 [2024-07-15 16:38:05.673558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 
[2024-07-15 16:38:05.673795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.673978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.673993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 
[2024-07-15 16:38:05.674322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.409 [2024-07-15 16:38:05.674471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.409 [2024-07-15 16:38:05.674490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 
[2024-07-15 16:38:05.674791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.674971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.674986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 
[2024-07-15 16:38:05.675314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.410 [2024-07-15 16:38:05.675388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.410 [2024-07-15 16:38:05.675403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:05.675416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:05.675444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:05.675471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:05.675499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:05.675526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:05.675554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67d80 is same with the state(5) to be set 00:21:37.411 [2024-07-15 16:38:05.675587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:37.411 [2024-07-15 16:38:05.675599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:37.411 [2024-07-15 16:38:05.675610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66320 len:8 PRP1 0x0 PRP2 0x0 00:21:37.411 [2024-07-15 16:38:05.675623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675685] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d67d80 was disconnected and freed. reset controller. 00:21:37.411 [2024-07-15 16:38:05.675702] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:37.411 [2024-07-15 16:38:05.675750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.411 [2024-07-15 16:38:05.675769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.411 [2024-07-15 16:38:05.675797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.411 [2024-07-15 16:38:05.675823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.411 [2024-07-15 16:38:05.675849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:05.675862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:37.411 [2024-07-15 16:38:05.675922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d0f0 (9): Bad file descriptor 00:21:37.411 [2024-07-15 16:38:05.679177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.411 [2024-07-15 16:38:05.753864] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.411 [2024-07-15 16:38:10.198863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.198912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.198941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.198957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.198973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.198987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.199017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.199051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.199082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.199110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.411 [2024-07-15 16:38:10.199139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.411 [2024-07-15 16:38:10.199378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.411 [2024-07-15 16:38:10.199392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 
16:38:10.199547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:37.412 [2024-07-15 16:38:10.199868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.199980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.199994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.412 [2024-07-15 16:38:10.200270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.412 [2024-07-15 16:38:10.200284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 
16:38:10.200385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.413 [2024-07-15 16:38:10.200578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:37.413 [2024-07-15 16:38:10.200720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.200978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.413 [2024-07-15 16:38:10.200991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.413 [2024-07-15 16:38:10.201006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 
16:38:10.201218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:37.414 [2024-07-15 16:38:10.201551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.414 [2024-07-15 16:38:10.201712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.414 [2024-07-15 16:38:10.201740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.414 [2024-07-15 16:38:10.201768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.414 [2024-07-15 16:38:10.201797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.414 [2024-07-15 16:38:10.201826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.414 [2024-07-15 16:38:10.201854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6344 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.414 [2024-07-15 16:38:10.201887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.414 [2024-07-15 16:38:10.201903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.201917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.201932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.201945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.201960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.201973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.201988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 
16:38:10.202048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.415 [2024-07-15 16:38:10.202174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202202] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.415 [2024-07-15 16:38:10.202610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:37.415 [2024-07-15 16:38:10.202657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:37.415 [2024-07-15 16:38:10.202669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6544 len:8 PRP1 0x0 PRP2 0x0 00:21:37.415 [2024-07-15 16:38:10.202682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202740] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d67b70 was disconnected and freed. reset controller. 
00:21:37.415 [2024-07-15 16:38:10.202758] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:37.415 [2024-07-15 16:38:10.202791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.415 [2024-07-15 16:38:10.202809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.415 [2024-07-15 16:38:10.202843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.415 [2024-07-15 16:38:10.202874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.415 [2024-07-15 16:38:10.202896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.415 [2024-07-15 16:38:10.202910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.416 [2024-07-15 16:38:10.202923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:37.416 [2024-07-15 16:38:10.206176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.416 [2024-07-15 16:38:10.206213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d0f0 (9): Bad file descriptor 00:21:37.416 [2024-07-15 16:38:10.374142] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.416 00:21:37.416 Latency(us) 00:21:37.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.416 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:37.416 Verification LBA range: start 0x0 length 0x4000 00:21:37.416 NVMe0n1 : 15.01 8481.82 33.13 717.11 0.00 13887.06 813.13 15146.10 00:21:37.416 =================================================================================================================== 00:21:37.416 Total : 8481.82 33.13 717.11 0.00 13887.06 813.13 15146.10 00:21:37.416 Received shutdown signal, test time was about 15.000000 seconds 00:21:37.416 00:21:37.416 Latency(us) 00:21:37.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.416 =================================================================================================================== 00:21:37.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1581200 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:37.416 16:38:16 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1581200 /var/tmp/bdevperf.sock 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1581200 ']' 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:37.416 [2024-07-15 16:38:16.889947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.416 16:38:16 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:37.696 [2024-07-15 16:38:17.154767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:37.696 16:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.953 NVMe0n1 00:21:37.953 16:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.518 00:21:38.518 16:38:17 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.775 00:21:38.775 16:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.775 16:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:39.032 16:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.288 16:38:18 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:42.574 16:38:21 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:42.574 16:38:21 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:42.574 16:38:22 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1581865 00:21:42.574 16:38:22 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:42.574 16:38:22 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1581865 00:21:43.949 0 00:21:43.949 16:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.949 [2024-07-15 16:38:16.337999] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:21:43.949 [2024-07-15 16:38:16.338083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581200 ] 00:21:43.949 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.949 [2024-07-15 16:38:16.396180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.949 [2024-07-15 16:38:16.502049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.949 [2024-07-15 16:38:18.849463] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:43.949 [2024-07-15 16:38:18.849554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.949 [2024-07-15 16:38:18.849576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.949 [2024-07-15 16:38:18.849606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.949 [2024-07-15 16:38:18.849620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.949 [2024-07-15 16:38:18.849635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.949 [2024-07-15 16:38:18.849648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.949 [2024-07-15 
16:38:18.849662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.949 [2024-07-15 16:38:18.849675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.949 [2024-07-15 16:38:18.849688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:43.949 [2024-07-15 16:38:18.849730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:43.949 [2024-07-15 16:38:18.849761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12510f0 (9): Bad file descriptor 00:21:43.949 [2024-07-15 16:38:18.903701] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:43.949 Running I/O for 1 seconds... 00:21:43.949 00:21:43.949 Latency(us) 00:21:43.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.949 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:43.949 Verification LBA range: start 0x0 length 0x4000 00:21:43.949 NVMe0n1 : 1.01 8284.04 32.36 0.00 0.00 15389.90 2063.17 21554.06 00:21:43.949 =================================================================================================================== 00:21:43.949 Total : 8284.04 32.36 0.00 0.00 15389.90 2063.17 21554.06 00:21:43.949 16:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:43.949 16:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:44.206 16:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:21:44.464 16:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.464 16:38:23 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:44.722 16:38:24 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.979 16:38:24 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1581200 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1581200 ']' 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1581200 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1581200 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1581200' 00:21:48.268 killing process with pid 1581200 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # 
kill 1581200 00:21:48.268 16:38:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1581200 00:21:48.525 16:38:27 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:48.525 16:38:27 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.784 rmmod nvme_tcp 00:21:48.784 rmmod nvme_fabrics 00:21:48.784 rmmod nvme_keyring 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1578926 ']' 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1578926 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1578926 ']' 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1578926 00:21:48.784 16:38:28 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1578926 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1578926' 00:21:48.784 killing process with pid 1578926 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1578926 00:21:48.784 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1578926 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.043 16:38:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.581 16:38:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.581 00:21:51.581 real 0m35.557s 00:21:51.581 user 2m5.512s 00:21:51.581 sys 0m5.944s 00:21:51.581 16:38:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:51.581 16:38:30 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:51.581 ************************************ 00:21:51.581 END TEST nvmf_failover 00:21:51.581 ************************************ 00:21:51.581 16:38:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:51.581 16:38:30 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:51.581 16:38:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:51.581 16:38:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.581 16:38:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.581 ************************************ 00:21:51.581 START TEST nvmf_host_discovery 00:21:51.581 ************************************ 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:51.581 * Looking for test storage... 
00:21:51.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.581 16:38:30 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.581 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:51.582 16:38:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:53.483 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:53.483 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.484 16:38:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:53.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.484 
16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:53.484 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:53.484 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.484 16:38:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:21:53.484 00:21:53.484 --- 10.0.0.2 ping statistics --- 00:21:53.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.484 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:21:53.484 00:21:53.484 --- 10.0.0.1 ping statistics --- 00:21:53.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.484 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1584488 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1584488 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1584488 ']' 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.484 16:38:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.484 [2024-07-15 16:38:32.947285] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:21:53.484 [2024-07-15 16:38:32.947358] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.484 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.484 [2024-07-15 16:38:33.016970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.743 [2024-07-15 16:38:33.134336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.743 [2024-07-15 16:38:33.134386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.743 [2024-07-15 16:38:33.134413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.743 [2024-07-15 16:38:33.134425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.743 [2024-07-15 16:38:33.134434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
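The log above and below repeatedly shows the harness printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." (and later /tmp/host.sock) before issuing any rpc.py calls. A minimal sketch of that wait-for-socket step is below; this is an illustrative stand-in, not the actual `waitforlisten` code from autotest_common.sh, which additionally probes the target over RPC once the socket appears. The `/tmp/demo.sock` path and the `wait_for_sock` name are invented for the example.

```shell
#!/bin/sh
# Simplified sketch (assumption: NOT the real autotest_common.sh waitforlisten):
# poll until a UNIX-domain socket path exists, the way the harness gates on
# /var/tmp/spdk.sock or /tmp/host.sock before sending RPCs to the target.
wait_for_sock() {
    sock=$1
    tries=${2:-50}          # ~5s at 0.1s per attempt
    i=0
    while [ "$i" -lt "$tries" ]; do
        if [ -S "$sock" ]; then
            echo "listening: $sock"
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    echo "timeout waiting for $sock" >&2
    return 1
}

# Demo: bind a throwaway UNIX socket (the socket file persists on disk
# after the process exits), then wait for it.
rm -f /tmp/demo.sock
python3 -c 'import socket; socket.socket(socket.AF_UNIX).bind("/tmp/demo.sock")'
wait_for_sock /tmp/demo.sock
```

The real harness follows this with RPC calls (e.g. `rpc.py -s /tmp/host.sock ...` as seen later in the log), which only succeed once the target is actually serving on the socket, not merely once the path exists.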
00:21:53.743 [2024-07-15 16:38:33.134459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.743 [2024-07-15 16:38:33.271656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.743 [2024-07-15 16:38:33.279842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.743 null0 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.743 null1 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1584617 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1584617 /tmp/host.sock 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1584617 ']' 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:53.743 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.743 16:38:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.001 [2024-07-15 16:38:33.356339] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:21:54.001 [2024-07-15 16:38:33.356405] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584617 ] 00:21:54.001 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.001 [2024-07-15 16:38:33.417464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.001 [2024-07-15 16:38:33.533213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:54.933 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:54.934 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:54.934 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.934 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.934 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:54.934 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:54.934 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 [2024-07-15 16:38:34.595448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:55.191 16:38:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:55.191 16:38:34 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:55.191 16:38:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:56.123 [2024-07-15 16:38:35.383071] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:56.123 [2024-07-15 16:38:35.383096] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:56.123 [2024-07-15 16:38:35.383117] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:56.123 [2024-07-15 16:38:35.469443] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:56.123 [2024-07-15 16:38:35.654717] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:21:56.123 [2024-07-15 16:38:35.654743] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# get_bdev_list 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.380 16:38:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:56.638 16:38:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:56.638 16:38:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.638 [2024-07-15 16:38:36.055835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:56.638 [2024-07-15 16:38:36.056406] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:56.638 [2024-07-15 16:38:36.056442] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:56.638 16:38:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_paths nvme0 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.638 [2024-07-15 16:38:36.185308] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:56.638 16:38:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:56.935 [2024-07-15 16:38:36.407439] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:56.935 [2024-07-15 16:38:36.407466] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:56.935 [2024-07-15 16:38:36.407486] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.891 [2024-07-15 16:38:37.295778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.891 [2024-07-15 16:38:37.295816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.891 [2024-07-15 16:38:37.295835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.891 [2024-07-15 16:38:37.295850] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.891 [2024-07-15 16:38:37.295871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.891 [2024-07-15 16:38:37.295895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.891 [2024-07-15 16:38:37.295927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.891 [2024-07-15 16:38:37.295941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.891 [2024-07-15 16:38:37.295953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.891 [2024-07-15 16:38:37.296385] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:57.891 [2024-07-15 16:38:37.296417] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_names 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:57.891 [2024-07-15 16:38:37.305946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.891 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.891 [2024-07-15 16:38:37.315985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.891 [2024-07-15 16:38:37.316312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.891 [2024-07-15 16:38:37.316345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2c00 with addr=10.0.0.2, port=4420 00:21:57.891 [2024-07-15 16:38:37.316364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.892 [2024-07-15 16:38:37.316390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.892 [2024-07-15 16:38:37.316413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.892 [2024-07-15 16:38:37.316429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.892 
[2024-07-15 16:38:37.316445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.892 [2024-07-15 16:38:37.316482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.892 [2024-07-15 16:38:37.326059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.892 [2024-07-15 16:38:37.326249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.892 [2024-07-15 16:38:37.326282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2c00 with addr=10.0.0.2, port=4420 00:21:57.892 [2024-07-15 16:38:37.326298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.892 [2024-07-15 16:38:37.326320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.892 [2024-07-15 16:38:37.326341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.892 [2024-07-15 16:38:37.326354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.892 [2024-07-15 16:38:37.326366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.892 [2024-07-15 16:38:37.326384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:57.892 [2024-07-15 16:38:37.336127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.892 [2024-07-15 16:38:37.336379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.892 [2024-07-15 16:38:37.336410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2c00 with addr=10.0.0.2, port=4420 00:21:57.892 [2024-07-15 16:38:37.336428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.892 [2024-07-15 16:38:37.336452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.892 [2024-07-15 16:38:37.336475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.892 [2024-07-15 16:38:37.336490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.892 [2024-07-15 16:38:37.336505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.892 [2024-07-15 16:38:37.336526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:57.892 [2024-07-15 16:38:37.346223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.892 [2024-07-15 16:38:37.346464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.892 [2024-07-15 16:38:37.346496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2c00 with addr=10.0.0.2, port=4420 00:21:57.892 [2024-07-15 16:38:37.346514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.892 [2024-07-15 16:38:37.346539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.892 [2024-07-15 16:38:37.346561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.892 [2024-07-15 16:38:37.346577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.892 [2024-07-15 16:38:37.346592] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.892 [2024-07-15 16:38:37.346619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:57.892 [2024-07-15 16:38:37.356306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.892 [2024-07-15 16:38:37.356511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.892 [2024-07-15 16:38:37.356543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2c00 with addr=10.0.0.2, port=4420 00:21:57.892 [2024-07-15 16:38:37.356561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.892 [2024-07-15 16:38:37.356585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.892 [2024-07-15 16:38:37.356608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.892 [2024-07-15 16:38:37.356624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.892 
[2024-07-15 16:38:37.356639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.892 [2024-07-15 16:38:37.356660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.892 [2024-07-15 16:38:37.366387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.892 [2024-07-15 16:38:37.366572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.892 [2024-07-15 16:38:37.366603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2c00 with addr=10.0.0.2, port=4420 00:21:57.892 [2024-07-15 16:38:37.366622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.892 [2024-07-15 16:38:37.366646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.892 [2024-07-15 16:38:37.366670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.892 [2024-07-15 16:38:37.366686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.892 [2024-07-15 16:38:37.366701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.892 [2024-07-15 16:38:37.366722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:57.892 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.892 [2024-07-15 16:38:37.376463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.892 [2024-07-15 16:38:37.376695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.892 [2024-07-15 16:38:37.376726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2c00 with addr=10.0.0.2, port=4420 00:21:57.892 [2024-07-15 16:38:37.376744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2c00 is same with the state(5) to be set 00:21:57.893 [2024-07-15 16:38:37.376768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2c00 (9): Bad file descriptor 00:21:57.893 [2024-07-15 16:38:37.376790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.893 [2024-07-15 16:38:37.376806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.893 [2024-07-15 16:38:37.376830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.893 [2024-07-15 16:38:37.376853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:57.893 [2024-07-15 16:38:37.382944] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:57.893 [2024-07-15 16:38:37.382972] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.893 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.168 16:38:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.096 [2024-07-15 16:38:38.629624] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:59.096 [2024-07-15 16:38:38.629662] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:59.096 [2024-07-15 16:38:38.629684] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.351 [2024-07-15 16:38:38.716962] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:59.609 [2024-07-15 16:38:38.992053] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:59.609 [2024-07-15 16:38:38.992098] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:59.609 16:38:38 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.609 16:38:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.609 request: 00:21:59.609 { 00:21:59.609 "name": "nvme", 00:21:59.609 "trtype": "tcp", 00:21:59.609 "traddr": "10.0.0.2", 00:21:59.609 "adrfam": "ipv4", 00:21:59.609 "trsvcid": "8009", 00:21:59.609 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:59.609 "wait_for_attach": true, 00:21:59.609 "method": "bdev_nvme_start_discovery", 00:21:59.609 "req_id": 1 00:21:59.609 } 00:21:59.609 Got JSON-RPC error response 00:21:59.609 response: 00:21:59.609 { 00:21:59.609 "code": -17, 00:21:59.609 "message": "File exists" 
00:21:59.609 } 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.609 request: 00:21:59.609 { 00:21:59.609 "name": "nvme_second", 00:21:59.609 "trtype": "tcp", 00:21:59.609 "traddr": "10.0.0.2", 00:21:59.609 "adrfam": "ipv4", 00:21:59.609 "trsvcid": "8009", 00:21:59.609 
"hostnqn": "nqn.2021-12.io.spdk:test", 00:21:59.609 "wait_for_attach": true, 00:21:59.609 "method": "bdev_nvme_start_discovery", 00:21:59.609 "req_id": 1 00:21:59.609 } 00:21:59.609 Got JSON-RPC error response 00:21:59.609 response: 00:21:59.609 { 00:21:59.609 "code": -17, 00:21:59.609 "message": "File exists" 00:21:59.609 } 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.609 16:38:39 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:59.609 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.610 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:59.610 16:38:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.610 16:38:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.981 [2024-07-15 16:38:40.207555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.981 [2024-07-15 16:38:40.207623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbdc90 with addr=10.0.0.2, port=8010 00:22:00.981 [2024-07-15 16:38:40.207656] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:00.981 [2024-07-15 16:38:40.207671] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:00.981 [2024-07-15 16:38:40.207685] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:01.912 [2024-07-15 16:38:41.209956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.912 [2024-07-15 16:38:41.209991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbdc90 with addr=10.0.0.2, port=8010 00:22:01.912 [2024-07-15 16:38:41.210011] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:01.912 [2024-07-15 16:38:41.210024] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:01.912 [2024-07-15 16:38:41.210035] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:02.843 [2024-07-15 16:38:42.212168] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:02.843 request: 00:22:02.843 { 00:22:02.843 "name": "nvme_second", 00:22:02.843 "trtype": "tcp", 00:22:02.843 "traddr": "10.0.0.2", 00:22:02.843 "adrfam": "ipv4", 00:22:02.843 "trsvcid": "8010", 00:22:02.843 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:02.843 "wait_for_attach": false, 00:22:02.843 "attach_timeout_ms": 3000, 00:22:02.843 "method": "bdev_nvme_start_discovery", 
00:22:02.843 "req_id": 1 00:22:02.843 } 00:22:02.843 Got JSON-RPC error response 00:22:02.843 response: 00:22:02.843 { 00:22:02.843 "code": -110, 00:22:02.843 "message": "Connection timed out" 00:22:02.843 } 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1584617 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.843 rmmod nvme_tcp 00:22:02.843 rmmod nvme_fabrics 00:22:02.843 rmmod nvme_keyring 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:02.843 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1584488 ']' 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1584488 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1584488 ']' 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1584488 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1584488 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1584488' 00:22:02.844 killing process with pid 1584488 00:22:02.844 
16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1584488 00:22:02.844 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1584488 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.101 16:38:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.633 00:22:05.633 real 0m14.043s 00:22:05.633 user 0m20.899s 00:22:05.633 sys 0m2.883s 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.633 ************************************ 00:22:05.633 END TEST nvmf_host_discovery 00:22:05.633 ************************************ 00:22:05.633 16:38:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:05.633 16:38:44 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:05.633 16:38:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:05.633 16:38:44 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.633 16:38:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:05.633 ************************************ 00:22:05.633 START TEST nvmf_host_multipath_status 00:22:05.633 ************************************ 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:05.633 * Looking for test storage... 00:22:05.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.633 16:38:44 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:05.633 16:38:44 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.633 
16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.633 16:38:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.528 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:07.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:07.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:07.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:07.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.529 16:38:46 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:07.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:22:07.529 00:22:07.529 --- 10.0.0.2 ping statistics --- 00:22:07.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.529 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:22:07.529 00:22:07.529 --- 10.0.0.1 ping statistics --- 00:22:07.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.529 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1587778 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1587778 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1587778 ']' 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.529 16:38:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.529 [2024-07-15 16:38:46.941288] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:22:07.529 [2024-07-15 16:38:46.941371] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.529 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.529 [2024-07-15 16:38:47.004802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:07.529 [2024-07-15 16:38:47.113864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.529 [2024-07-15 16:38:47.113928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.529 [2024-07-15 16:38:47.113942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.529 [2024-07-15 16:38:47.113953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.529 [2024-07-15 16:38:47.113963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:07.529 [2024-07-15 16:38:47.114018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.529 [2024-07-15 16:38:47.114023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1587778 00:22:07.787 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:08.046 [2024-07-15 16:38:47.541846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.046 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:08.304 Malloc0 00:22:08.304 16:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:08.562 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.820 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.077 [2024-07-15 16:38:48.618149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.077 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:09.334 [2024-07-15 16:38:48.902979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1588060 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1588060 /var/tmp/bdevperf.sock 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1588060 ']' 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
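Condensed from the RPC calls traced above, the target-side setup for this test is: create the TCP transport, back a namespace with a malloc bdev, and expose it through one subsystem with two listeners (ports 4420 and 4421) so the host sees two paths to the same namespace. A sketch of that sequence (requires a running SPDK target; `rpc.py` paths shortened from the log):

```
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
```

The host side then attaches both listeners to the same controller name (`bdev_nvme_attach_controller -b Nvme0 ... -x multipath`), which is what makes the second attach register as an additional path rather than a new controller.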
00:22:09.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.334 16:38:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:09.900 16:38:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.900 16:38:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:09.900 16:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:10.157 16:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:10.414 Nvme0n1 00:22:10.414 16:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:10.987 Nvme0n1 00:22:10.987 16:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:10.987 16:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:12.886 16:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:12.886 16:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:13.144 16:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:13.401 16:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:14.362 16:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:14.362 16:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.362 16:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.362 16:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.620 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.620 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:14.620 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.620 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:14.877 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.877 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.877 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.877 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.134 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.134 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.134 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.134 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.392 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.392 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.392 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.392 16:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.649 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.649 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.649 16:38:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.649 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.907 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.907 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:15.907 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:16.178 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:16.435 16:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:17.368 16:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:17.368 16:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:17.368 16:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.368 16:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.625 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:22:17.625 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:17.625 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.625 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.882 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.882 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.882 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.882 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.140 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.140 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.140 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.140 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.397 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.397 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:22:18.397 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.397 16:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.654 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.654 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:18.654 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.654 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.911 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.911 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:18.911 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:19.169 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:19.427 16:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:20.361 16:38:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:20.361 16:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:20.361 16:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.361 16:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.925 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.925 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:20.926 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.926 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:20.926 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.926 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:20.926 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.926 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.183 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.183 16:39:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.183 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.183 16:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.441 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.441 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:21.441 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.441 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.699 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.699 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:21.699 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.699 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:21.957 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.957 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
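Each `port_status` check above pipes `bdev_nvme_get_io_paths` through `jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD'` to extract one boolean per listener port. The same selection can be sketched in Python against a hand-written payload shaped like that RPC output (the payload values here are illustrative, trimmed to the fields the filter touches):

```python
import json

# Hypothetical bdev_nvme_get_io_paths output, reduced to the fields
# the test's jq filter actually reads.
payload = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"},
       "current": true, "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"},
       "current": false, "connected": true, "accessible": false}
    ]}
  ]
}
""")

def port_status(data, trsvcid, field):
    """Return one flag for the io_path listening on the given port,
    equivalent to: .poll_groups[].io_paths[]
                   | select(.transport.trsvcid==TRSVCID).FIELD"""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    raise KeyError(trsvcid)

print(port_status(payload, "4420", "current"))     # True
print(port_status(payload, "4421", "accessible"))  # False
```

The shell helper then compares the extracted string against the expected value (`[[ true == \t\r\u\e ]]`), which is the pattern repeated throughout the trace.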
00:22:21.957 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:22.215 16:39:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:22.472 16:39:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.843 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.100 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:22:24.100 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.100 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.100 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.358 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.358 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.358 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.358 16:39:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.616 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.616 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:24.616 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.616 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.873 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.873 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:22:24.873 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.873 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.130 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.130 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:25.130 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:25.388 16:39:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:25.646 16:39:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:26.579 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:26.579 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:26.579 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.579 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:26.836 16:39:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.836 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:26.836 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.836 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.093 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:27.094 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.094 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.094 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.352 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.352 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.352 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.352 16:39:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:27.609 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.609 
16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:27.609 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.609 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:27.866 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:27.866 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:27.866 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.866 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.123 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.123 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:28.123 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:28.380 16:39:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:28.657 16:39:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:22:29.629 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:29.629 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:29.629 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.629 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:29.886 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:29.886 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:29.886 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.886 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:30.143 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.143 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:30.143 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.143 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:30.401 16:39:09 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.401 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:30.401 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.401 16:39:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:30.660 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.660 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:30.660 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.660 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:30.917 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:30.917 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:30.917 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.917 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.174 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.174 16:39:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:31.431 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:31.431 16:39:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:31.687 16:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:31.944 16:39:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:32.900 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:32.900 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:32.900 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.900 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.158 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.158 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.158 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.158 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.416 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.416 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:33.416 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.416 16:39:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:33.673 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.673 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:33.673 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.673 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:33.931 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.931 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:33.931 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:22:33.931 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.189 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.189 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:34.189 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.189 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:34.447 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.447 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:34.447 16:39:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:34.705 16:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:34.962 16:39:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:35.895 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:35.895 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:35.896 16:39:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.896 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:36.154 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.154 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:36.154 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.154 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:36.412 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.412 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:36.412 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.412 16:39:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:36.670 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.670 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:36.670 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.670 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:36.927 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.927 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:36.927 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.927 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:37.184 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.184 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:37.184 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.184 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:37.461 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.461 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:37.461 16:39:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:37.721 16:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:37.978 16:39:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:38.909 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:38.909 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:38.909 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.909 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:39.166 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.166 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:39.166 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.166 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:39.423 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.423 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:39.423 16:39:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.423 16:39:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:39.681 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.681 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:39.681 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.681 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:39.939 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.939 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:39.939 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.939 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:40.195 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.195 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:40.195 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.195 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:40.452 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.452 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:40.452 16:39:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:40.708 16:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:40.966 16:39:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:41.896 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:41.896 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:41.896 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.896 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:42.153 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.153 16:39:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:42.153 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.153 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:42.410 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:42.410 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:42.410 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.410 16:39:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:42.667 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.667 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:42.667 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.667 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:42.925 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.925 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:42.925 
16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:42.925 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:43.190 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:43.190 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:43.190 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:43.190 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:43.448 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:43.448 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1588060
00:22:43.448 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1588060 ']'
00:22:43.448 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1588060
00:22:43.448 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:22:43.448 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:43.448 16:39:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1588060
00:22:43.448 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:43.448 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:43.448 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1588060'
killing process with pid 1588060
16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1588060
16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1588060
00:22:43.709 Connection closed with partial response:
00:22:43.709
00:22:43.709
00:22:43.709 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1588060
00:22:43.709 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-15 16:38:48.967590] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
[2024-07-15 16:38:48.967682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588060 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 16:38:49.026321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 16:38:49.139616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
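The repeated `port_status` checks traced above pair `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` with a jq filter over `.poll_groups[].io_paths[]`. The selection logic can be sketched in Python; the sample payload below is a hypothetical, abridged shape inferred only from the jq expressions in this log (field names `poll_groups`, `io_paths`, `transport.trsvcid`, `current`, `connected`, `accessible`), not the full RPC schema:

```python
import json

# Hypothetical, abridged bdev_nvme_get_io_paths response -- shape inferred
# from the jq filters in the log, not from the real SPDK RPC schema.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": true, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": false, "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(data, trsvcid, field):
    """Mirror of: jq -r '.poll_groups[].io_paths[]
    | select(.transport.trsvcid=="TRSVCID").FIELD'"""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None  # no path listening on that service id

print(port_status(sample, "4420", "current"))     # -> True
print(port_status(sample, "4421", "accessible"))  # -> False
```

The test script then compares each extracted value against the expected booleans passed to `check_status`, exactly as the `[[ true == \t\r\u\e ]]` trace lines show.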
00:22:43.709 [2024-07-15 16:39:04.810448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.810522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.810602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.810625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.810651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.810669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.810693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.810710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.810734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.810751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.810775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 
[2024-07-15 16:39:04.810794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.810817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.810847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.810888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.810922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.811064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.811088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.811116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.811136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:43.709 [2024-07-15 16:39:04.811160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.709 [2024-07-15 16:39:04.811201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 
16:39:04.811226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.811970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.811988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.812535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.812552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.813391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.813442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.813486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.813530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.813573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.710 [2024-07-15 16:39:04.813617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.710 [2024-07-15 16:39:04.813660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.710 [2024-07-15 16:39:04.813708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.710 [2024-07-15 16:39:04.813753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.710 [2024-07-15 16:39:04.813796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.710 [2024-07-15 16:39:04.813840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:43.710 [2024-07-15 16:39:04.813866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.710 [2024-07-15 16:39:04.813891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.813918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.813935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.813962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.813979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.814960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.814977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.711 [2024-07-15 16:39:04.815844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:43.711 [2024-07-15 16:39:04.815872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.815907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.815936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.815955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.815982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.816000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.816463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.816695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.816750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.816804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.816854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.816916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.816965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.816997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:04.817412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.817462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.817512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.817561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.817610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.817659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.817708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:04.817740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:04.817758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:20.432021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:20.432080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:20.432141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.712 [2024-07-15 16:39:20.432177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:20.432202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:20.432218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:20.432242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:20.432258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:20.432296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:20.432313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:20.432347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.712 [2024-07-15 16:39:20.432364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:43.712 [2024-07-15 16:39:20.432403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.432657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.432698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.432738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.432980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.432997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.433026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.433044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.433067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.433084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.433107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.433125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.433148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.433170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.434534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.434580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.434974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.434996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.435028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.435070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.435110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.713 [2024-07-15 16:39:20.435150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.435206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.435246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.435285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.435324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.435379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:43.713 [2024-07-15 16:39:20.435402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.713 [2024-07-15 16:39:20.435418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.714 [2024-07-15 16:39:20.435796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.714 [2024-07-15 16:39:20.435848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.714 [2024-07-15 16:39:20.435910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.714 [2024-07-15 16:39:20.435950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.435972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.714 [2024-07-15 16:39:20.435988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:43.714 [2024-07-15 16:39:20.436015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.714 [2024-07-15 16:39:20.436032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:43.714 Received shutdown signal, test time was about 32.508280 seconds 00:22:43.714 00:22:43.714 Latency(us) 00:22:43.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.714 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:43.714 Verification LBA range: start 0x0 length 0x4000 00:22:43.714 Nvme0n1 : 32.51 7887.18 30.81 0.00 0.00 16202.74 807.06 4026531.84 00:22:43.714 =================================================================================================================== 00:22:43.714 Total : 7887.18 30.81 0.00 0.00 16202.74 807.06 4026531.84 00:22:43.714 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # 
'[' tcp == tcp ']' 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.970 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.970 rmmod nvme_tcp 00:22:43.970 rmmod nvme_fabrics 00:22:44.227 rmmod nvme_keyring 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1587778 ']' 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1587778 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1587778 ']' 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1587778 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1587778 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:44.227 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1587778' 00:22:44.227 killing process with pid 1587778 00:22:44.228 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@967 -- # kill 1587778 00:22:44.228 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1587778 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.486 16:39:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.388 16:39:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:46.388 00:22:46.388 real 0m41.213s 00:22:46.388 user 2m4.440s 00:22:46.388 sys 0m10.541s 00:22:46.388 16:39:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:46.388 16:39:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:46.388 ************************************ 00:22:46.388 END TEST nvmf_host_multipath_status 00:22:46.388 ************************************ 00:22:46.388 16:39:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:46.388 16:39:25 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:46.388 16:39:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:22:46.388 16:39:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.388 16:39:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.647 ************************************ 00:22:46.647 START TEST nvmf_discovery_remove_ifc 00:22:46.647 ************************************ 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:46.647 * Looking for test storage... 00:22:46.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.647 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.648 
16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:46.648 16:39:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.648 16:39:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # e810=() 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:48.546 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:48.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:48.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.547 
16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:48.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.547 16:39:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:48.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.547 16:39:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:48.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:22:48.547 00:22:48.547 --- 10.0.0.2 ping statistics --- 00:22:48.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.547 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:22:48.547 00:22:48.547 --- 10.0.0.1 ping statistics --- 00:22:48.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.547 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:48.547 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1594766 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1594766 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1594766 ']' 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.805 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.806 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.806 [2024-07-15 16:39:28.206031] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:22:48.806 [2024-07-15 16:39:28.206119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.806 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.806 [2024-07-15 16:39:28.271815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.806 [2024-07-15 16:39:28.378315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.806 [2024-07-15 16:39:28.378368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.806 [2024-07-15 16:39:28.378391] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.806 [2024-07-15 16:39:28.378402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.806 [2024-07-15 16:39:28.378412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.806 [2024-07-15 16:39:28.378436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.064 [2024-07-15 16:39:28.521316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.064 [2024-07-15 16:39:28.529508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:49.064 null0 00:22:49.064 [2024-07-15 16:39:28.561444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1594905 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:49.064 16:39:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1594905 /tmp/host.sock 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1594905 ']' 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:49.064 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.064 16:39:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.064 [2024-07-15 16:39:28.627320] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:22:49.064 [2024-07-15 16:39:28.627385] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594905 ] 00:22:49.064 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.322 [2024-07-15 16:39:28.689104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.322 [2024-07-15 16:39:28.805428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.253 16:39:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.185 [2024-07-15 16:39:30.741258] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:51.185 [2024-07-15 16:39:30.741300] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:51.185 [2024-07-15 16:39:30.741325] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:51.442 [2024-07-15 16:39:30.829612] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:51.442 [2024-07-15 16:39:31.013959] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:51.442 [2024-07-15 16:39:31.014022] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:51.442 [2024-07-15 16:39:31.014063] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:51.442 [2024-07-15 16:39:31.014085] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:51.442 [2024-07-15 16:39:31.014114] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:51.443 16:39:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.443 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.700 [2024-07-15 16:39:31.061627] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10c7870 was disconnected and freed. delete nvme_qpair. 
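The get_bdev_list helper traced above builds its bdev name list by piping `rpc.py -s /tmp/host.sock bdev_get_bdevs` through `jq -r '.[].name' | sort | xargs`, and wait_for_bdev re-runs it once a second until the list matches the expected value. A self-contained sketch of that pipeline against canned RPC output (assumes jq is installed; the JSON below is a made-up stand-in for a real bdev_get_bdevs response, not captured from this run):

```shell
#!/usr/bin/env bash
# Stand-in for `rpc.py -s /tmp/host.sock bdev_get_bdevs`: canned JSON
# shaped like the RPC's reply (names invented for illustration).
bdev_get_bdevs() {
    cat <<'EOF'
[
  {"name": "nvme0n1", "block_size": 512, "num_blocks": 732585168},
  {"name": "Malloc0", "block_size": 512, "num_blocks": 131072}
]
EOF
}

# Same pipeline as get_bdev_list in host/discovery_remove_ifc.sh:
# extract every bdev name, sort, and collapse onto one line.
get_bdev_list() {
    bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_bdev_list   # prints: Malloc0 nvme0n1
```

After the test downs cvl_0_0, the same loop runs until the list compares equal to the empty string, confirming the discovery service tore the nvme0n1 bdev down.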
00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:51.700 16:39:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:52.633 16:39:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:54.005 16:39:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:54.936 16:39:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.866 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.866 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.866 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.866 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.866 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.866 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.867 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.867 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.867 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:55.867 16:39:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.798 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:56.799 16:39:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:57.056 [2024-07-15 16:39:36.454695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:57.056 [2024-07-15 16:39:36.454763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.056 [2024-07-15 16:39:36.454787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.056 [2024-07-15 16:39:36.454814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:57.056 [2024-07-15 16:39:36.454829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.056 [2024-07-15 16:39:36.454844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.056 [2024-07-15 16:39:36.454859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.056 [2024-07-15 16:39:36.454888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.056 [2024-07-15 16:39:36.454905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.056 [2024-07-15 16:39:36.454922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.056 [2024-07-15 16:39:36.454951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.056 [2024-07-15 16:39:36.454963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108e300 is same with the state(5) to be set 00:22:57.056 [2024-07-15 16:39:36.464712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108e300 (9): Bad file descriptor 00:22:57.056 [2024-07-15 16:39:36.474758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:57.988 [2024-07-15 16:39:37.490978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:57.988 [2024-07-15 16:39:37.491051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108e300 with addr=10.0.0.2, port=4420 00:22:57.988 [2024-07-15 16:39:37.491075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108e300 is same with the state(5) to be set 00:22:57.988 [2024-07-15 16:39:37.491115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108e300 (9): Bad file descriptor 00:22:57.988 [2024-07-15 16:39:37.491524] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:57.988 [2024-07-15 16:39:37.491555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:57.988 [2024-07-15 16:39:37.491572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:57.988 [2024-07-15 16:39:37.491586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:57.988 [2024-07-15 16:39:37.491617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:57.988 [2024-07-15 16:39:37.491639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:57.988 16:39:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.921 [2024-07-15 16:39:38.494146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:58.921 [2024-07-15 16:39:38.494209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:58.921 [2024-07-15 16:39:38.494238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:58.921 [2024-07-15 16:39:38.494255] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:58.921 [2024-07-15 16:39:38.494285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.921 [2024-07-15 16:39:38.494328] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:58.921 [2024-07-15 16:39:38.494377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.921 [2024-07-15 16:39:38.494401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.921 [2024-07-15 16:39:38.494420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.921 [2024-07-15 16:39:38.494436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.921 [2024-07-15 16:39:38.494452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.921 [2024-07-15 16:39:38.494468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.921 [2024-07-15 16:39:38.494484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.921 [2024-07-15 16:39:38.494500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.921 [2024-07-15 16:39:38.494515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.921 [2024-07-15 16:39:38.494529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.921 [2024-07-15 16:39:38.494544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:22:58.921 [2024-07-15 16:39:38.494676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108d780 (9): Bad file descriptor 00:22:58.921 [2024-07-15 16:39:38.495694] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:58.921 [2024-07-15 16:39:38.495722] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:58.921 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.921 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.921 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.921 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.921 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.921 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.921 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:59.179 16:39:38 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:59.179 16:39:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.141 
16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:00.141 16:39:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.072 [2024-07-15 16:39:40.547785] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:01.072 [2024-07-15 16:39:40.547831] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:01.072 [2024-07-15 16:39:40.547857] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:01.329 [2024-07-15 16:39:40.676290] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:01.329 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:01.329 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.329 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:01.329 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.329 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:01.329 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:01.330 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:01.330 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.330 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:01.330 16:39:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.330 [2024-07-15 16:39:40.902810] 
bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:01.330 [2024-07-15 16:39:40.902863] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:01.330 [2024-07-15 16:39:40.902911] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:01.330 [2024-07-15 16:39:40.902951] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:01.330 [2024-07-15 16:39:40.902964] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:01.330 [2024-07-15 16:39:40.906271] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10a62a0 was disconnected and freed. delete nvme_qpair. 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@90 -- # killprocess 1594905 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1594905 ']' 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1594905 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1594905 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1594905' 00:23:02.261 killing process with pid 1594905 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1594905 00:23:02.261 16:39:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1594905 00:23:02.519 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:02.519 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.519 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:02.519 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.519 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:02.519 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.519 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.519 rmmod nvme_tcp 
00:23:02.519 rmmod nvme_fabrics 00:23:02.519 rmmod nvme_keyring 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1594766 ']' 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1594766 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1594766 ']' 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1594766 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1594766 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1594766' 00:23:02.777 killing process with pid 1594766 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1594766 00:23:02.777 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1594766 00:23:03.036 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.036 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.036 
16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.036 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.036 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.036 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.036 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.036 16:39:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.938 16:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:04.938 00:23:04.938 real 0m18.488s 00:23:04.938 user 0m27.511s 00:23:04.938 sys 0m3.053s 00:23:04.938 16:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.938 16:39:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.938 ************************************ 00:23:04.938 END TEST nvmf_discovery_remove_ifc 00:23:04.938 ************************************ 00:23:04.938 16:39:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:04.938 16:39:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:04.938 16:39:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:04.938 16:39:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.938 16:39:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.196 ************************************ 00:23:05.196 START TEST nvmf_identify_kernel_target 00:23:05.196 ************************************ 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:05.196 * Looking for test storage... 00:23:05.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:05.196 16:39:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.196 16:39:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.196 16:39:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.093 16:39:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:07.093 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.093 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:07.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:07.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:07.094 Found net devices under 
0000:0a:00.1: cvl_0_1 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:23:07.094 00:23:07.094 --- 10.0.0.2 ping statistics --- 00:23:07.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.094 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:07.094 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:23:07.351 00:23:07.351 --- 10.0.0.1 ping statistics --- 00:23:07.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.351 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.351 16:39:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:07.351 16:39:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:08.283 Waiting for block devices as requested 00:23:08.283 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:08.541 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:08.541 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:08.799 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:08.799 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:08.799 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:08.799 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:08.799 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:09.057 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:09.057 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:09.057 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:09.057 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:09.314 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:09.314 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:09.314 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:09.314 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:09.572 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:09.572 No valid GPT data, bailing 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:09.572 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:09.573 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:09.831 00:23:09.831 Discovery Log Number of Records 2, Generation counter 2 00:23:09.831 =====Discovery Log Entry 0====== 00:23:09.831 trtype: tcp 00:23:09.831 adrfam: ipv4 00:23:09.831 subtype: current discovery subsystem 00:23:09.831 treq: not specified, sq flow control disable supported 00:23:09.831 portid: 1 00:23:09.831 trsvcid: 4420 00:23:09.831 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:09.831 traddr: 10.0.0.1 00:23:09.831 eflags: none 00:23:09.831 sectype: none 00:23:09.831 =====Discovery Log Entry 1====== 00:23:09.831 trtype: tcp 00:23:09.831 adrfam: ipv4 00:23:09.831 subtype: nvme subsystem 00:23:09.831 treq: not specified, sq flow control disable supported 00:23:09.831 portid: 1 00:23:09.831 trsvcid: 4420 00:23:09.831 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:09.831 traddr: 10.0.0.1 00:23:09.831 eflags: none 00:23:09.831 sectype: none 00:23:09.831 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:09.831 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:09.831 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.831 ===================================================== 00:23:09.831 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:09.831 ===================================================== 00:23:09.831 Controller Capabilities/Features 00:23:09.831 ================================ 00:23:09.831 Vendor ID: 0000 00:23:09.831 Subsystem Vendor ID: 0000 00:23:09.831 Serial Number: d3d2e8f054ad155c0d7e 00:23:09.831 Model Number: Linux 00:23:09.831 Firmware Version: 6.7.0-68 00:23:09.831 Recommended Arb Burst: 0 00:23:09.831 IEEE OUI Identifier: 00 00 00 00:23:09.831 Multi-path I/O 00:23:09.831 May have multiple subsystem ports: No 00:23:09.831 May have multiple controllers: No 00:23:09.831 Associated with SR-IOV VF: No 00:23:09.831 Max Data Transfer Size: Unlimited 00:23:09.831 Max Number of Namespaces: 0 00:23:09.831 Max Number of I/O Queues: 1024 00:23:09.831 NVMe Specification Version (VS): 1.3 00:23:09.831 NVMe Specification Version (Identify): 1.3 00:23:09.831 Maximum Queue Entries: 1024 00:23:09.832 Contiguous Queues Required: No 00:23:09.832 Arbitration Mechanisms Supported 00:23:09.832 Weighted Round Robin: Not Supported 00:23:09.832 Vendor Specific: Not Supported 00:23:09.832 Reset Timeout: 7500 ms 00:23:09.832 Doorbell Stride: 4 bytes 00:23:09.832 NVM Subsystem Reset: Not Supported 00:23:09.832 Command Sets Supported 00:23:09.832 NVM Command Set: Supported 00:23:09.832 Boot Partition: Not Supported 00:23:09.832 Memory Page Size Minimum: 4096 bytes 00:23:09.832 Memory Page Size Maximum: 4096 bytes 00:23:09.832 Persistent Memory Region: Not Supported 00:23:09.832 Optional Asynchronous Events Supported 00:23:09.832 Namespace Attribute Notices: Not Supported 00:23:09.832 Firmware Activation Notices: Not Supported 00:23:09.832 ANA Change Notices: Not Supported 00:23:09.832 PLE Aggregate Log Change Notices: Not Supported 
00:23:09.832 LBA Status Info Alert Notices: Not Supported 00:23:09.832 EGE Aggregate Log Change Notices: Not Supported 00:23:09.832 Normal NVM Subsystem Shutdown event: Not Supported 00:23:09.832 Zone Descriptor Change Notices: Not Supported 00:23:09.832 Discovery Log Change Notices: Supported 00:23:09.832 Controller Attributes 00:23:09.832 128-bit Host Identifier: Not Supported 00:23:09.832 Non-Operational Permissive Mode: Not Supported 00:23:09.832 NVM Sets: Not Supported 00:23:09.832 Read Recovery Levels: Not Supported 00:23:09.832 Endurance Groups: Not Supported 00:23:09.832 Predictable Latency Mode: Not Supported 00:23:09.832 Traffic Based Keep ALive: Not Supported 00:23:09.832 Namespace Granularity: Not Supported 00:23:09.832 SQ Associations: Not Supported 00:23:09.832 UUID List: Not Supported 00:23:09.832 Multi-Domain Subsystem: Not Supported 00:23:09.832 Fixed Capacity Management: Not Supported 00:23:09.832 Variable Capacity Management: Not Supported 00:23:09.832 Delete Endurance Group: Not Supported 00:23:09.832 Delete NVM Set: Not Supported 00:23:09.832 Extended LBA Formats Supported: Not Supported 00:23:09.832 Flexible Data Placement Supported: Not Supported 00:23:09.832 00:23:09.832 Controller Memory Buffer Support 00:23:09.832 ================================ 00:23:09.832 Supported: No 00:23:09.832 00:23:09.832 Persistent Memory Region Support 00:23:09.832 ================================ 00:23:09.832 Supported: No 00:23:09.832 00:23:09.832 Admin Command Set Attributes 00:23:09.832 ============================ 00:23:09.832 Security Send/Receive: Not Supported 00:23:09.832 Format NVM: Not Supported 00:23:09.832 Firmware Activate/Download: Not Supported 00:23:09.832 Namespace Management: Not Supported 00:23:09.832 Device Self-Test: Not Supported 00:23:09.832 Directives: Not Supported 00:23:09.832 NVMe-MI: Not Supported 00:23:09.832 Virtualization Management: Not Supported 00:23:09.832 Doorbell Buffer Config: Not Supported 00:23:09.832 Get LBA Status 
Capability: Not Supported 00:23:09.832 Command & Feature Lockdown Capability: Not Supported 00:23:09.832 Abort Command Limit: 1 00:23:09.832 Async Event Request Limit: 1 00:23:09.832 Number of Firmware Slots: N/A 00:23:09.832 Firmware Slot 1 Read-Only: N/A 00:23:09.832 Firmware Activation Without Reset: N/A 00:23:09.832 Multiple Update Detection Support: N/A 00:23:09.832 Firmware Update Granularity: No Information Provided 00:23:09.832 Per-Namespace SMART Log: No 00:23:09.832 Asymmetric Namespace Access Log Page: Not Supported 00:23:09.832 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:09.832 Command Effects Log Page: Not Supported 00:23:09.832 Get Log Page Extended Data: Supported 00:23:09.832 Telemetry Log Pages: Not Supported 00:23:09.832 Persistent Event Log Pages: Not Supported 00:23:09.832 Supported Log Pages Log Page: May Support 00:23:09.832 Commands Supported & Effects Log Page: Not Supported 00:23:09.832 Feature Identifiers & Effects Log Page:May Support 00:23:09.832 NVMe-MI Commands & Effects Log Page: May Support 00:23:09.832 Data Area 4 for Telemetry Log: Not Supported 00:23:09.832 Error Log Page Entries Supported: 1 00:23:09.832 Keep Alive: Not Supported 00:23:09.832 00:23:09.832 NVM Command Set Attributes 00:23:09.832 ========================== 00:23:09.832 Submission Queue Entry Size 00:23:09.832 Max: 1 00:23:09.832 Min: 1 00:23:09.832 Completion Queue Entry Size 00:23:09.832 Max: 1 00:23:09.832 Min: 1 00:23:09.832 Number of Namespaces: 0 00:23:09.832 Compare Command: Not Supported 00:23:09.832 Write Uncorrectable Command: Not Supported 00:23:09.832 Dataset Management Command: Not Supported 00:23:09.832 Write Zeroes Command: Not Supported 00:23:09.832 Set Features Save Field: Not Supported 00:23:09.832 Reservations: Not Supported 00:23:09.832 Timestamp: Not Supported 00:23:09.832 Copy: Not Supported 00:23:09.832 Volatile Write Cache: Not Present 00:23:09.832 Atomic Write Unit (Normal): 1 00:23:09.832 Atomic Write Unit (PFail): 1 
00:23:09.832 Atomic Compare & Write Unit: 1 00:23:09.832 Fused Compare & Write: Not Supported 00:23:09.832 Scatter-Gather List 00:23:09.832 SGL Command Set: Supported 00:23:09.832 SGL Keyed: Not Supported 00:23:09.832 SGL Bit Bucket Descriptor: Not Supported 00:23:09.832 SGL Metadata Pointer: Not Supported 00:23:09.832 Oversized SGL: Not Supported 00:23:09.832 SGL Metadata Address: Not Supported 00:23:09.832 SGL Offset: Supported 00:23:09.832 Transport SGL Data Block: Not Supported 00:23:09.832 Replay Protected Memory Block: Not Supported 00:23:09.832 00:23:09.832 Firmware Slot Information 00:23:09.832 ========================= 00:23:09.832 Active slot: 0 00:23:09.832 00:23:09.832 00:23:09.832 Error Log 00:23:09.832 ========= 00:23:09.832 00:23:09.832 Active Namespaces 00:23:09.832 ================= 00:23:09.832 Discovery Log Page 00:23:09.832 ================== 00:23:09.832 Generation Counter: 2 00:23:09.832 Number of Records: 2 00:23:09.832 Record Format: 0 00:23:09.832 00:23:09.832 Discovery Log Entry 0 00:23:09.832 ---------------------- 00:23:09.832 Transport Type: 3 (TCP) 00:23:09.832 Address Family: 1 (IPv4) 00:23:09.832 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:09.832 Entry Flags: 00:23:09.832 Duplicate Returned Information: 0 00:23:09.832 Explicit Persistent Connection Support for Discovery: 0 00:23:09.832 Transport Requirements: 00:23:09.832 Secure Channel: Not Specified 00:23:09.832 Port ID: 1 (0x0001) 00:23:09.832 Controller ID: 65535 (0xffff) 00:23:09.832 Admin Max SQ Size: 32 00:23:09.832 Transport Service Identifier: 4420 00:23:09.832 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:09.832 Transport Address: 10.0.0.1 00:23:09.832 Discovery Log Entry 1 00:23:09.832 ---------------------- 00:23:09.832 Transport Type: 3 (TCP) 00:23:09.832 Address Family: 1 (IPv4) 00:23:09.832 Subsystem Type: 2 (NVM Subsystem) 00:23:09.832 Entry Flags: 00:23:09.832 Duplicate Returned Information: 0 00:23:09.832 Explicit Persistent 
Connection Support for Discovery: 0 00:23:09.832 Transport Requirements: 00:23:09.832 Secure Channel: Not Specified 00:23:09.832 Port ID: 1 (0x0001) 00:23:09.832 Controller ID: 65535 (0xffff) 00:23:09.832 Admin Max SQ Size: 32 00:23:09.832 Transport Service Identifier: 4420 00:23:09.832 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:09.832 Transport Address: 10.0.0.1 00:23:09.832 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:09.832 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.832 get_feature(0x01) failed 00:23:09.832 get_feature(0x02) failed 00:23:09.832 get_feature(0x04) failed 00:23:09.832 ===================================================== 00:23:09.832 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:09.832 ===================================================== 00:23:09.832 Controller Capabilities/Features 00:23:09.832 ================================ 00:23:09.832 Vendor ID: 0000 00:23:09.832 Subsystem Vendor ID: 0000 00:23:09.832 Serial Number: fcf511b6c2f34551579c 00:23:09.832 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:09.832 Firmware Version: 6.7.0-68 00:23:09.832 Recommended Arb Burst: 6 00:23:09.832 IEEE OUI Identifier: 00 00 00 00:23:09.832 Multi-path I/O 00:23:09.832 May have multiple subsystem ports: Yes 00:23:09.832 May have multiple controllers: Yes 00:23:09.832 Associated with SR-IOV VF: No 00:23:09.832 Max Data Transfer Size: Unlimited 00:23:09.832 Max Number of Namespaces: 1024 00:23:09.832 Max Number of I/O Queues: 128 00:23:09.832 NVMe Specification Version (VS): 1.3 00:23:09.832 NVMe Specification Version (Identify): 1.3 00:23:09.832 Maximum Queue Entries: 1024 00:23:09.832 Contiguous Queues Required: No 00:23:09.832 Arbitration Mechanisms Supported 
00:23:09.832 Weighted Round Robin: Not Supported 00:23:09.832 Vendor Specific: Not Supported 00:23:09.832 Reset Timeout: 7500 ms 00:23:09.832 Doorbell Stride: 4 bytes 00:23:09.832 NVM Subsystem Reset: Not Supported 00:23:09.832 Command Sets Supported 00:23:09.832 NVM Command Set: Supported 00:23:09.832 Boot Partition: Not Supported 00:23:09.832 Memory Page Size Minimum: 4096 bytes 00:23:09.832 Memory Page Size Maximum: 4096 bytes 00:23:09.832 Persistent Memory Region: Not Supported 00:23:09.832 Optional Asynchronous Events Supported 00:23:09.832 Namespace Attribute Notices: Supported 00:23:09.833 Firmware Activation Notices: Not Supported 00:23:09.833 ANA Change Notices: Supported 00:23:09.833 PLE Aggregate Log Change Notices: Not Supported 00:23:09.833 LBA Status Info Alert Notices: Not Supported 00:23:09.833 EGE Aggregate Log Change Notices: Not Supported 00:23:09.833 Normal NVM Subsystem Shutdown event: Not Supported 00:23:09.833 Zone Descriptor Change Notices: Not Supported 00:23:09.833 Discovery Log Change Notices: Not Supported 00:23:09.833 Controller Attributes 00:23:09.833 128-bit Host Identifier: Supported 00:23:09.833 Non-Operational Permissive Mode: Not Supported 00:23:09.833 NVM Sets: Not Supported 00:23:09.833 Read Recovery Levels: Not Supported 00:23:09.833 Endurance Groups: Not Supported 00:23:09.833 Predictable Latency Mode: Not Supported 00:23:09.833 Traffic Based Keep ALive: Supported 00:23:09.833 Namespace Granularity: Not Supported 00:23:09.833 SQ Associations: Not Supported 00:23:09.833 UUID List: Not Supported 00:23:09.833 Multi-Domain Subsystem: Not Supported 00:23:09.833 Fixed Capacity Management: Not Supported 00:23:09.833 Variable Capacity Management: Not Supported 00:23:09.833 Delete Endurance Group: Not Supported 00:23:09.833 Delete NVM Set: Not Supported 00:23:09.833 Extended LBA Formats Supported: Not Supported 00:23:09.833 Flexible Data Placement Supported: Not Supported 00:23:09.833 00:23:09.833 Controller Memory Buffer Support 
00:23:09.833 ================================ 00:23:09.833 Supported: No 00:23:09.833 00:23:09.833 Persistent Memory Region Support 00:23:09.833 ================================ 00:23:09.833 Supported: No 00:23:09.833 00:23:09.833 Admin Command Set Attributes 00:23:09.833 ============================ 00:23:09.833 Security Send/Receive: Not Supported 00:23:09.833 Format NVM: Not Supported 00:23:09.833 Firmware Activate/Download: Not Supported 00:23:09.833 Namespace Management: Not Supported 00:23:09.833 Device Self-Test: Not Supported 00:23:09.833 Directives: Not Supported 00:23:09.833 NVMe-MI: Not Supported 00:23:09.833 Virtualization Management: Not Supported 00:23:09.833 Doorbell Buffer Config: Not Supported 00:23:09.833 Get LBA Status Capability: Not Supported 00:23:09.833 Command & Feature Lockdown Capability: Not Supported 00:23:09.833 Abort Command Limit: 4 00:23:09.833 Async Event Request Limit: 4 00:23:09.833 Number of Firmware Slots: N/A 00:23:09.833 Firmware Slot 1 Read-Only: N/A 00:23:09.833 Firmware Activation Without Reset: N/A 00:23:09.833 Multiple Update Detection Support: N/A 00:23:09.833 Firmware Update Granularity: No Information Provided 00:23:09.833 Per-Namespace SMART Log: Yes 00:23:09.833 Asymmetric Namespace Access Log Page: Supported 00:23:09.833 ANA Transition Time : 10 sec 00:23:09.833 00:23:09.833 Asymmetric Namespace Access Capabilities 00:23:09.833 ANA Optimized State : Supported 00:23:09.833 ANA Non-Optimized State : Supported 00:23:09.833 ANA Inaccessible State : Supported 00:23:09.833 ANA Persistent Loss State : Supported 00:23:09.833 ANA Change State : Supported 00:23:09.833 ANAGRPID is not changed : No 00:23:09.833 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:09.833 00:23:09.833 ANA Group Identifier Maximum : 128 00:23:09.833 Number of ANA Group Identifiers : 128 00:23:09.833 Max Number of Allowed Namespaces : 1024 00:23:09.833 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:09.833 Command Effects Log Page: Supported 
00:23:09.833 Get Log Page Extended Data: Supported 00:23:09.833 Telemetry Log Pages: Not Supported 00:23:09.833 Persistent Event Log Pages: Not Supported 00:23:09.833 Supported Log Pages Log Page: May Support 00:23:09.833 Commands Supported & Effects Log Page: Not Supported 00:23:09.833 Feature Identifiers & Effects Log Page:May Support 00:23:09.833 NVMe-MI Commands & Effects Log Page: May Support 00:23:09.833 Data Area 4 for Telemetry Log: Not Supported 00:23:09.833 Error Log Page Entries Supported: 128 00:23:09.833 Keep Alive: Supported 00:23:09.833 Keep Alive Granularity: 1000 ms 00:23:09.833 00:23:09.833 NVM Command Set Attributes 00:23:09.833 ========================== 00:23:09.833 Submission Queue Entry Size 00:23:09.833 Max: 64 00:23:09.833 Min: 64 00:23:09.833 Completion Queue Entry Size 00:23:09.833 Max: 16 00:23:09.833 Min: 16 00:23:09.833 Number of Namespaces: 1024 00:23:09.833 Compare Command: Not Supported 00:23:09.833 Write Uncorrectable Command: Not Supported 00:23:09.833 Dataset Management Command: Supported 00:23:09.833 Write Zeroes Command: Supported 00:23:09.833 Set Features Save Field: Not Supported 00:23:09.833 Reservations: Not Supported 00:23:09.833 Timestamp: Not Supported 00:23:09.833 Copy: Not Supported 00:23:09.833 Volatile Write Cache: Present 00:23:09.833 Atomic Write Unit (Normal): 1 00:23:09.833 Atomic Write Unit (PFail): 1 00:23:09.833 Atomic Compare & Write Unit: 1 00:23:09.833 Fused Compare & Write: Not Supported 00:23:09.833 Scatter-Gather List 00:23:09.833 SGL Command Set: Supported 00:23:09.833 SGL Keyed: Not Supported 00:23:09.833 SGL Bit Bucket Descriptor: Not Supported 00:23:09.833 SGL Metadata Pointer: Not Supported 00:23:09.833 Oversized SGL: Not Supported 00:23:09.833 SGL Metadata Address: Not Supported 00:23:09.833 SGL Offset: Supported 00:23:09.833 Transport SGL Data Block: Not Supported 00:23:09.833 Replay Protected Memory Block: Not Supported 00:23:09.833 00:23:09.833 Firmware Slot Information 00:23:09.833 
========================= 00:23:09.833 Active slot: 0 00:23:09.833 00:23:09.833 Asymmetric Namespace Access 00:23:09.833 =========================== 00:23:09.833 Change Count : 0 00:23:09.833 Number of ANA Group Descriptors : 1 00:23:09.833 ANA Group Descriptor : 0 00:23:09.833 ANA Group ID : 1 00:23:09.833 Number of NSID Values : 1 00:23:09.833 Change Count : 0 00:23:09.833 ANA State : 1 00:23:09.833 Namespace Identifier : 1 00:23:09.833 00:23:09.833 Commands Supported and Effects 00:23:09.833 ============================== 00:23:09.833 Admin Commands 00:23:09.833 -------------- 00:23:09.833 Get Log Page (02h): Supported 00:23:09.833 Identify (06h): Supported 00:23:09.833 Abort (08h): Supported 00:23:09.833 Set Features (09h): Supported 00:23:09.833 Get Features (0Ah): Supported 00:23:09.833 Asynchronous Event Request (0Ch): Supported 00:23:09.833 Keep Alive (18h): Supported 00:23:09.833 I/O Commands 00:23:09.833 ------------ 00:23:09.833 Flush (00h): Supported 00:23:09.833 Write (01h): Supported LBA-Change 00:23:09.833 Read (02h): Supported 00:23:09.833 Write Zeroes (08h): Supported LBA-Change 00:23:09.833 Dataset Management (09h): Supported 00:23:09.833 00:23:09.833 Error Log 00:23:09.833 ========= 00:23:09.833 Entry: 0 00:23:09.833 Error Count: 0x3 00:23:09.833 Submission Queue Id: 0x0 00:23:09.833 Command Id: 0x5 00:23:09.833 Phase Bit: 0 00:23:09.833 Status Code: 0x2 00:23:09.833 Status Code Type: 0x0 00:23:09.833 Do Not Retry: 1 00:23:09.833 Error Location: 0x28 00:23:09.833 LBA: 0x0 00:23:09.833 Namespace: 0x0 00:23:09.833 Vendor Log Page: 0x0 00:23:09.833 ----------- 00:23:09.833 Entry: 1 00:23:09.833 Error Count: 0x2 00:23:09.833 Submission Queue Id: 0x0 00:23:09.833 Command Id: 0x5 00:23:09.833 Phase Bit: 0 00:23:09.833 Status Code: 0x2 00:23:09.833 Status Code Type: 0x0 00:23:09.833 Do Not Retry: 1 00:23:09.833 Error Location: 0x28 00:23:09.833 LBA: 0x0 00:23:09.833 Namespace: 0x0 00:23:09.833 Vendor Log Page: 0x0 00:23:09.833 ----------- 00:23:09.833 
Entry: 2 00:23:09.833 Error Count: 0x1 00:23:09.833 Submission Queue Id: 0x0 00:23:09.833 Command Id: 0x4 00:23:09.833 Phase Bit: 0 00:23:09.833 Status Code: 0x2 00:23:09.833 Status Code Type: 0x0 00:23:09.833 Do Not Retry: 1 00:23:09.833 Error Location: 0x28 00:23:09.833 LBA: 0x0 00:23:09.833 Namespace: 0x0 00:23:09.833 Vendor Log Page: 0x0 00:23:09.833 00:23:09.833 Number of Queues 00:23:09.833 ================ 00:23:09.833 Number of I/O Submission Queues: 128 00:23:09.833 Number of I/O Completion Queues: 128 00:23:09.833 00:23:09.833 ZNS Specific Controller Data 00:23:09.833 ============================ 00:23:09.833 Zone Append Size Limit: 0 00:23:09.833 00:23:09.833 00:23:09.833 Active Namespaces 00:23:09.833 ================= 00:23:09.833 get_feature(0x05) failed 00:23:09.833 Namespace ID:1 00:23:09.833 Command Set Identifier: NVM (00h) 00:23:09.833 Deallocate: Supported 00:23:09.833 Deallocated/Unwritten Error: Not Supported 00:23:09.833 Deallocated Read Value: Unknown 00:23:09.833 Deallocate in Write Zeroes: Not Supported 00:23:09.833 Deallocated Guard Field: 0xFFFF 00:23:09.833 Flush: Supported 00:23:09.833 Reservation: Not Supported 00:23:09.833 Namespace Sharing Capabilities: Multiple Controllers 00:23:09.833 Size (in LBAs): 1953525168 (931GiB) 00:23:09.833 Capacity (in LBAs): 1953525168 (931GiB) 00:23:09.833 Utilization (in LBAs): 1953525168 (931GiB) 00:23:09.833 UUID: 5b533394-1eec-42e3-bad2-f38a1205e2e4 00:23:09.833 Thin Provisioning: Not Supported 00:23:09.833 Per-NS Atomic Units: Yes 00:23:09.834 Atomic Boundary Size (Normal): 0 00:23:09.834 Atomic Boundary Size (PFail): 0 00:23:09.834 Atomic Boundary Offset: 0 00:23:09.834 NGUID/EUI64 Never Reused: No 00:23:09.834 ANA group ID: 1 00:23:09.834 Namespace Write Protected: No 00:23:09.834 Number of LBA Formats: 1 00:23:09.834 Current LBA Format: LBA Format #00 00:23:09.834 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:09.834 00:23:09.834 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:09.834 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.834 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.092 rmmod nvme_tcp 00:23:10.092 rmmod nvme_fabrics 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.092 16:39:49 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:11.991 16:39:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:13.362 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:13.362 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:13.362 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:13.362 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:13.362 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:13.362 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:13.362 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:13.362 0000:00:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:23:13.362 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:13.362 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:13.362 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:13.362 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:13.362 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:13.362 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:13.362 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:13.362 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:14.319 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:14.319 00:23:14.319 real 0m9.287s 00:23:14.319 user 0m1.940s 00:23:14.319 sys 0m3.327s 00:23:14.319 16:39:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.319 16:39:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.319 ************************************ 00:23:14.319 END TEST nvmf_identify_kernel_target 00:23:14.319 ************************************ 00:23:14.319 16:39:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:14.319 16:39:53 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:14.319 16:39:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:14.319 16:39:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.319 16:39:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.319 ************************************ 00:23:14.319 START TEST nvmf_auth_host 00:23:14.319 ************************************ 00:23:14.319 16:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:14.576 * Looking for test storage... 
00:23:14.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.576 
16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.576 
16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.576 16:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.473 16:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:16.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.473 16:39:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:16.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.473 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:23:16.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:16.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.474 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:16.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:23:16.767 00:23:16.767 --- 10.0.0.2 ping statistics --- 00:23:16.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.767 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:23:16.767 00:23:16.767 --- 10.0.0.1 ping statistics --- 00:23:16.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.767 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.767 16:39:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1602100 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1602100 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1602100 ']' 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.767 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:17.028 16:39:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7578453582a9120612e9d915a0581637 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5Io 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7578453582a9120612e9d915a0581637 0 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7578453582a9120612e9d915a0581637 0 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7578453582a9120612e9d915a0581637 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5Io 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5Io 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5Io 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bf3ed70aeeece90838e873c46b8f9ed5ecce2a20912c232f1d73ce84ddf555fd 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oKg 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bf3ed70aeeece90838e873c46b8f9ed5ecce2a20912c232f1d73ce84ddf555fd 3 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bf3ed70aeeece90838e873c46b8f9ed5ecce2a20912c232f1d73ce84ddf555fd 3 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bf3ed70aeeece90838e873c46b8f9ed5ecce2a20912c232f1d73ce84ddf555fd 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oKg 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oKg 00:23:17.028 16:39:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.oKg 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa8b0e2eba1e7bae4af5bd08bed9998ab4ec25cdea58bc41 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.qVN 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa8b0e2eba1e7bae4af5bd08bed9998ab4ec25cdea58bc41 0 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa8b0e2eba1e7bae4af5bd08bed9998ab4ec25cdea58bc41 0 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa8b0e2eba1e7bae4af5bd08bed9998ab4ec25cdea58bc41 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:17.028 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.qVN 00:23:17.287 16:39:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.qVN 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.qVN 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d77d1989bf964ab237f61303ff87e4891a1648423ef56ce9 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aG1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d77d1989bf964ab237f61303ff87e4891a1648423ef56ce9 2 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d77d1989bf964ab237f61303ff87e4891a1648423ef56ce9 2 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d77d1989bf964ab237f61303ff87e4891a1648423ef56ce9 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.287 16:39:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aG1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aG1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.aG1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d800daf0ca20afdd976c57dc7ad8e0e 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xft 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d800daf0ca20afdd976c57dc7ad8e0e 1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d800daf0ca20afdd976c57dc7ad8e0e 1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d800daf0ca20afdd976c57dc7ad8e0e 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xft 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xft 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xft 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=570ca33263781b4f361991de19b781db 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Up5 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 570ca33263781b4f361991de19b781db 1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 570ca33263781b4f361991de19b781db 1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=570ca33263781b4f361991de19b781db 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:17.287 
16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Up5 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Up5 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Up5 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:17.287 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e482d420bba41f6343c789d1a9191c7dfc6855d5ae3838d4 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.52t 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e482d420bba41f6343c789d1a9191c7dfc6855d5ae3838d4 2 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e482d420bba41f6343c789d1a9191c7dfc6855d5ae3838d4 2 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=e482d420bba41f6343c789d1a9191c7dfc6855d5ae3838d4 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.52t 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.52t 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.52t 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=726bec18132008a4d7af5fa9a61ea44a 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dEQ 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 726bec18132008a4d7af5fa9a61ea44a 0 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 726bec18132008a4d7af5fa9a61ea44a 0 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.288 16:39:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=726bec18132008a4d7af5fa9a61ea44a 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:17.288 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dEQ 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dEQ 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dEQ 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=584c3c51f0dfb35c143a46a26fab7ec9222b8c475275bd32d029b5ba2f1d7086 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wr8 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 584c3c51f0dfb35c143a46a26fab7ec9222b8c475275bd32d029b5ba2f1d7086 3 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 584c3c51f0dfb35c143a46a26fab7ec9222b8c475275bd32d029b5ba2f1d7086 3 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=584c3c51f0dfb35c143a46a26fab7ec9222b8c475275bd32d029b5ba2f1d7086 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wr8 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wr8 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.wr8 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1602100 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1602100 ']' 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
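The `gen_dhchap_key` calls above draw `len/2` random bytes with `xxd -p -c0 -l N /dev/urandom` and hand the hex string to an inline python snippet (`format_key DHHC-1 <key> <digest>`), which emits a secret of the shape `DHHC-1:<digest id>:<base64 payload>:` (visible later in the log, e.g. the `DHHC-1:00:YWE4...` key). A minimal standalone sketch of that wrapping, assuming the payload is the ASCII key bytes followed by their little-endian CRC32; the function name here is illustrative, not SPDK's:

```shell
#!/usr/bin/env bash
# Sketch of the DHHC-1 wrapping done by nvmf/common.sh's format_key.
# Assumption: payload = base64(ascii key bytes + little-endian CRC32 of those bytes).
format_dhchap_key_sketch() {
    local key=$1 digest=$2   # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest"
}

# 48 hex chars (24 random bytes) with the "null" digest, as in the keys[1] step above
format_dhchap_key_sketch aa8b0e2eba1e7bae4af5bd08bed9998ab4ec25cdea58bc41 0
```

The test then `chmod 0600`s the resulting file before registering it with `keyring_file_add_key`, since the kernel and SPDK both reject world-readable secrets.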
00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.545 16:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5Io 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.oKg ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oKg 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.qVN 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.aG1 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aG1 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xft 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Up5 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Up5 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.52t 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 
16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dEQ ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dEQ 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wr8 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:17.802 16:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:18.733 Waiting for block devices as requested 00:23:18.733 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:18.990 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:18.990 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:19.246 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:19.246 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:19.246 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:19.503 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:19.503 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:19.503 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:19.503 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:19.760 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:19.760 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:19.760 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:19.760 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:20.016 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:20.016 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:20.016 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:20.580 No valid GPT data, bailing 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:23:20.580 16:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:20.581 00:23:20.581 Discovery Log Number of Records 2, Generation counter 2 00:23:20.581 =====Discovery Log Entry 0====== 00:23:20.581 trtype: tcp 00:23:20.581 adrfam: ipv4 00:23:20.581 subtype: current discovery subsystem 00:23:20.581 treq: not specified, sq flow control disable supported 00:23:20.581 portid: 1 00:23:20.581 trsvcid: 4420 00:23:20.581 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:20.581 traddr: 10.0.0.1 00:23:20.581 eflags: none 00:23:20.581 sectype: none 00:23:20.581 =====Discovery Log Entry 1====== 00:23:20.581 trtype: tcp 00:23:20.581 adrfam: ipv4 00:23:20.581 subtype: nvme subsystem 00:23:20.581 treq: not specified, sq flow control disable supported 00:23:20.581 portid: 1 00:23:20.581 trsvcid: 4420 00:23:20.581 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:20.581 traddr: 10.0.0.1 00:23:20.581 eflags: none 00:23:20.581 sectype: none 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.581 16:40:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.581 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.837 nvme0n1 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.837 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.838 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.838 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.838 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.838 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.838 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.094 nvme0n1 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.094 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.351 nvme0n1 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.351 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.351 nvme0n1 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.607 16:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.607 16:40:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.607 nvme0n1 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.607 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.865 16:40:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:21.865 16:40:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.865 nvme0n1 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.865 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.141 16:40:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.141 16:40:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.141 nvme0n1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.141 
16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.141 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.399 nvme0n1 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.399 16:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.657 nvme0n1 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.657 16:40:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.657 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.913 nvme0n1 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.913 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.914 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.171 nvme0n1 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:23.171 16:40:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.171 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.429 16:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.686 nvme0n1 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:23:23.686 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.687 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.944 nvme0n1 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.944 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.945 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.202 nvme0n1 00:23:24.202 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.202 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.202 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.202 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.202 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.202 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.460 16:40:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.460 16:40:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.460 16:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.718 nvme0n1 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.718 
16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.718 16:40:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.718 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.976 nvme0n1 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.976 
16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:24.976 16:40:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.976 16:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.541 nvme0n1 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.541 16:40:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.541 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.799 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.057 nvme0n1 00:23:26.057 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.057 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.057 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.057 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.057 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.314 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.315 16:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.879 nvme0n1 
00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.879 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.444 nvme0n1 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.444 16:40:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.444 16:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.010 nvme0n1 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 
00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.010 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.011 16:40:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.011 16:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.943 nvme0n1 00:23:28.943 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.943 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.943 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.943 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.943 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.944 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.201 16:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.133 nvme0n1 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.133 16:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.071 nvme0n1 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.071 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:31.371 16:40:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.371 16:40:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.371 16:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.305 nvme0n1 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.305 16:40:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.305 16:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.237 nvme0n1 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.237 
16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.237 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.495 nvme0n1 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.495 
16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.495 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.496 16:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.754 nvme0n1 00:23:33.754 16:40:13 
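The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line that recurs at `host/auth.sh@58` uses bash's `${var:+word}` expansion to emit the `--dhchap-ctrlr-key` argument pair only when a controller key exists for the current keyid. A standalone sketch of that idiom (the key value below is a made-up placeholder, not one of the test's real secrets):

```shell
# Demo of the ${var:+...} array idiom from host/auth.sh@58: the whole
# "--dhchap-ctrlr-key ckeyN" argument pair appears only if ckeys[keyid]
# is set and non-empty; otherwise the array is empty and the flag vanishes.
ckeys=([1]="DHHC-1:02:placeholder==")   # hypothetical: only keyid 1 has a ctrlr key
keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
with_key=${#ckey[@]}      # 2: the flag word plus its value word
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
without_key=${#ckey[@]}   # 0: no ctrlr key for keyid 4, pair dropped entirely
echo "$with_key $without_key"
```

This is why the key4 iteration later in the log attaches with `--dhchap-key key4` but no `--dhchap-ctrlr-key` argument: its `ckey` entry is empty, so the expansion produces zero words.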
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.754 16:40:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.754 16:40:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.754 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.011 nvme0n1 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.011 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.012 
16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.012 16:40:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.012 nvme0n1 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.012 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.269 16:40:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.269 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.270 nvme0n1 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.270 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
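The secrets passed via `--dhchap-key`/`--dhchap-ctrlr-key` all follow the NVMe DH-HMAC-CHAP textual secret representation visible throughout this log: colon-separated fields `DHHC-1:<hash-id>:<base64 payload>:`, where the two-digit hash id marks the optional key transform (`00` for an untransformed secret) and the base64 payload carries the secret material. A minimal parsing sketch, using a placeholder payload rather than any of the test's real keys:

```shell
# Hedged sketch: splitting a DH-HMAC-CHAP secret of the shape seen above.
# The payload "cGxhY2Vob2xkZXI=" is a dummy value, not a real key.
secret='DHHC-1:00:cGxhY2Vob2xkZXI=:'
IFS=: read -r fmt hash b64 _ <<< "$secret"   # trailing ":" yields a final empty field
echo "$fmt $hash"
```

The `[[ -z $ckey ]]` checks at `host/auth.sh@51` simply test whether such a secret string is present at all before echoing it into the target's configuration.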
]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.528 16:40:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.528 16:40:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.528 nvme0n1 00:23:34.528 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.528 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.528 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.528 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.528 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.528 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:34.786 16:40:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.786 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.787 nvme0n1 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.787 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.045 nvme0n1 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.045 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.303 16:40:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 nvme0n1 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.303 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 16:40:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
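The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line at host/auth.sh@58 explains why the keyid=4 attach in this trace carries only `--dhchap-key key4` and no `--dhchap-ctrlr-key`: the `:+` expansion emits the extra flag pair only when a controller key exists for that index. A standalone illustration (the `ckeys` values below are fabricated placeholders, not the suite's real secrets):

```shell
#!/usr/bin/env bash
# ckeys[i] holds the bidirectional (controller) secret for key i, or "" if none.
# Index 4 is empty, mirroring the trace where ckey='' for keyid=4.
ckeys=("DHHC-1:03:aaaa=:" "DHHC-1:02:bbbb=:" "DHHC-1:01:cccc=:" "DHHC-1:00:dddd=:" "")

build_attach_args() {
    local keyid=$1
    # ":+" expands to the alternate words only when ckeys[keyid] is non-empty,
    # so an empty controller key yields an empty array -> no flag at all.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo --dhchap-key "key${keyid}" "${ckey[@]}"
}

build_attach_args 2   # -> --dhchap-key key2 --dhchap-ctrlr-key ckey2
build_attach_args 4   # -> --dhchap-key key4
```

Building optional flags as an array (rather than a string) keeps word splitting correct when the arguments are later passed to `rpc_cmd`.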
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.562 16:40:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.820 nvme0n1 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:35.820 
16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.820 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.821 
16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.821 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.078 nvme0n1 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.078 16:40:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.078 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.079 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.336 nvme0n1 00:23:36.336 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.336 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.337 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.594 16:40:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.852 nvme0n1 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.852 16:40:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.852 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.110 nvme0n1 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.110 16:40:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.110 16:40:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.110 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.674 nvme0n1 00:23:37.674 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.674 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.674 16:40:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.674 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.674 16:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.674 16:40:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:37.674 16:40:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.674 16:40:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.674 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.238 nvme0n1 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.238 
16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.238 16:40:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 nvme0n1 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.803 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.804 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.804 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.804 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.369 nvme0n1 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.369 16:40:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.369 16:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.934 nvme0n1 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.934 16:40:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.498 nvme0n1 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:40.498 16:40:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.498 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.755 16:40:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.689 nvme0n1 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.689 16:40:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.623 nvme0n1 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.623 16:40:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 nvme0n1 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.557 16:40:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.557 16:40:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.557 16:40:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.491 nvme0n1 00:23:44.491 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.491 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.491 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.491 16:40:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.491 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.748 
16:40:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.748 16:40:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.681 nvme0n1 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 
00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- 
# rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.681 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.939 nvme0n1 00:23:45.939 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.939 16:40:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.939 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.939 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.939 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.940 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.205 nvme0n1 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u:
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk:
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u:
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]]
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk:
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.205 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.206 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.508 nvme0n1
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==:
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8:
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==:
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8:
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.508 16:40:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.508 nvme0n1
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.508 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=:
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=:
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:46.767 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.768 nvme0n1
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn:
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=:
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn:
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=:
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.768 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.028 nvme0n1
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==:
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==:
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==:
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==:
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.028 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.288 nvme0n1
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u:
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk:
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u:
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk:
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.288 16:40:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.548 nvme0n1
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==:
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8:
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==:
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8:
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.548 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.808 nvme0n1
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=:
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=:
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.808 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:48.068 nvme0n1
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn:
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=:
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn:
00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]]
00:23:48.068 16:40:27
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.068 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.328 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.587 nvme0n1 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:48.587 16:40:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.587 16:40:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.587 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.846 nvme0n1 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.846 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.104 nvme0n1 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.104 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.364 16:40:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.624 nvme0n1 00:23:49.624 16:40:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:49.624 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.625 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.883 nvme0n1 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.883 
16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.883 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.883 16:40:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.448 nvme0n1 00:23:50.448 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.448 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.448 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.448 16:40:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.448 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.448 16:40:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.448 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:50.449 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:50.449 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.708 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.277 nvme0n1 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.277 16:40:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.277 16:40:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.846 nvme0n1 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.846 16:40:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.846 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.413 nvme0n1 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.413 16:40:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.413 16:40:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.985 nvme0n1 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzU3ODQ1MzU4MmE5MTIwNjEyZTlkOTE1YTA1ODE2Mzdz/KDn: 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmYzZWQ3MGFlZWVjZTkwODM4ZTg3M2M0NmI4ZjllZDVlY2NlMmEyMDkxMmMyMzJmMWQ3M2NlODRkZGY1NTVmZF1FQVo=: 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.985 16:40:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.918 nvme0n1 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.918 16:40:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.296 nvme0n1 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ4MDBkYWYwY2EyMGFmZGQ5NzZjNTdkYzdhZDhlMGUV3R3u: 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTcwY2EzMzI2Mzc4MWI0ZjM2MTk5MWRlMTliNzgxZGJO/pAk: 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.296 16:40:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.863 nvme0n1 00:23:55.863 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.863 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:55.863 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.863 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.863 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.863 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTQ4MmQ0MjBiYmE0MWY2MzQzYzc4OWQxYTkxOTFjN2RmYzY4NTVkNWFlMzgzOGQ0bqKWSw==: 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzI2YmVjMTgxMzIwMDhhNGQ3YWY1ZmE5YTYxZWE0NGG761o8: 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.121 16:40:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.121 16:40:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.058 nvme0n1 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.058 16:40:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTg0YzNjNTFmMGRmYjM1YzE0M2E0NmEyNmZhYjdlYzkyMjJiOGM0NzUyNzViZDMyZDAyOWI1YmEyZjFkNzA4Nn4DksI=: 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.058 16:40:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.058 16:40:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.994 nvme0n1 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWE4YjBlMmViYTFlN2JhZTRhZjViZDA4YmVkOTk5OGFiNGVjMjVjZGVhNThiYzQxJbPknA==: 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc3ZDE5ODliZjk2NGFiMjM3ZjYxMzAzZmY4N2U0ODkxYTE2NDg0MjNlZjU2Y2U5tsdeBQ==: 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.994 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.994 request: 00:23:57.994 { 00:23:57.994 "name": "nvme0", 00:23:57.995 "trtype": "tcp", 00:23:57.995 "traddr": "10.0.0.1", 00:23:57.995 "adrfam": "ipv4", 00:23:57.995 "trsvcid": "4420", 00:23:57.995 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:57.995 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:57.995 "prchk_reftag": false, 00:23:57.995 "prchk_guard": false, 00:23:57.995 "hdgst": false, 00:23:57.995 "ddgst": false, 00:23:57.995 "method": "bdev_nvme_attach_controller", 00:23:57.995 "req_id": 1 00:23:57.995 } 00:23:57.995 Got JSON-RPC error response 00:23:57.995 response: 00:23:57.995 { 00:23:57.995 "code": -5, 00:23:57.995 "message": "Input/output error" 00:23:57.995 } 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.995 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.255 request: 00:23:58.255 { 00:23:58.255 "name": "nvme0", 00:23:58.255 "trtype": "tcp", 00:23:58.255 "traddr": "10.0.0.1", 00:23:58.255 "adrfam": "ipv4", 00:23:58.255 "trsvcid": "4420", 00:23:58.255 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:58.255 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:58.255 "prchk_reftag": false, 00:23:58.255 "prchk_guard": false, 00:23:58.255 "hdgst": false, 00:23:58.255 "ddgst": false, 00:23:58.255 "dhchap_key": "key2", 00:23:58.255 "method": "bdev_nvme_attach_controller", 00:23:58.255 "req_id": 1 00:23:58.255 } 00:23:58.255 Got JSON-RPC error response 00:23:58.255 response: 00:23:58.255 { 
00:23:58.255 "code": -5, 00:23:58.255 "message": "Input/output error" 00:23:58.255 } 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.255 
16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.255 request: 00:23:58.255 { 00:23:58.255 "name": "nvme0", 00:23:58.255 "trtype": "tcp", 00:23:58.255 "traddr": "10.0.0.1", 00:23:58.255 "adrfam": "ipv4", 00:23:58.255 "trsvcid": "4420", 00:23:58.255 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:58.255 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:58.255 
"prchk_reftag": false, 00:23:58.255 "prchk_guard": false, 00:23:58.255 "hdgst": false, 00:23:58.255 "ddgst": false, 00:23:58.255 "dhchap_key": "key1", 00:23:58.255 "dhchap_ctrlr_key": "ckey2", 00:23:58.255 "method": "bdev_nvme_attach_controller", 00:23:58.255 "req_id": 1 00:23:58.255 } 00:23:58.255 Got JSON-RPC error response 00:23:58.255 response: 00:23:58.255 { 00:23:58.255 "code": -5, 00:23:58.255 "message": "Input/output error" 00:23:58.255 } 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.255 rmmod nvme_tcp 00:23:58.255 rmmod nvme_fabrics 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1602100 ']' 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1602100 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1602100 ']' 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1602100 00:23:58.255 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:58.515 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.515 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602100 00:23:58.515 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:58.515 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:58.515 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602100' 00:23:58.515 killing process with pid 1602100 00:23:58.515 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1602100 00:23:58.515 16:40:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1602100 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:23:58.776 16:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:00.678 16:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:02.081 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:02.081 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:02.081 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:02.081 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 
00:24:02.081 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:02.081 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:02.081 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:02.081 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:02.081 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:03.020 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:03.020 16:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5Io /tmp/spdk.key-null.qVN /tmp/spdk.key-sha256.xft /tmp/spdk.key-sha384.52t /tmp/spdk.key-sha512.wr8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:03.020 16:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:03.954 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:03.954 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:03.954 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:03.954 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:03.954 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:03.954 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:03.954 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:03.954 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:03.954 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:03.954 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:03.954 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:03.954 
0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:03.954 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:03.954 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:04.212 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:04.212 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:04.212 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:04.212 00:24:04.212 real 0m49.850s 00:24:04.212 user 0m47.621s 00:24:04.212 sys 0m5.713s 00:24:04.212 16:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.212 16:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.212 ************************************ 00:24:04.212 END TEST nvmf_auth_host 00:24:04.212 ************************************ 00:24:04.212 16:40:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:04.212 16:40:43 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:04.212 16:40:43 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:04.212 16:40:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:04.212 16:40:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.212 16:40:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.212 ************************************ 00:24:04.212 START TEST nvmf_digest 00:24:04.212 ************************************ 00:24:04.212 16:40:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:04.471 * Looking for test storage... 
00:24:04.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.471 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.472 16:40:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.377 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:06.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:06.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:06.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:06.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:24:06.378 00:24:06.378 --- 10.0.0.2 ping statistics --- 00:24:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.378 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:24:06.378 00:24:06.378 --- 10.0.0.1 ping statistics --- 00:24:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.378 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.378 ************************************ 00:24:06.378 START TEST nvmf_digest_clean 00:24:06.378 ************************************ 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:06.378 16:40:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1611573 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1611573 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1611573 ']' 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.378 16:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.638 [2024-07-15 16:40:45.988668] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:24:06.638 [2024-07-15 16:40:45.988750] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.638 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.638 [2024-07-15 16:40:46.052907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.638 [2024-07-15 16:40:46.161807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.638 [2024-07-15 16:40:46.161866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.638 [2024-07-15 16:40:46.161886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.638 [2024-07-15 16:40:46.161899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.638 [2024-07-15 16:40:46.161910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.638 [2024-07-15 16:40:46.161945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.638 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.896 null0 00:24:06.896 [2024-07-15 16:40:46.332624] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.896 [2024-07-15 16:40:46.356857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:06.896 
16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1611592 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1611592 /var/tmp/bperf.sock 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1611592 ']' 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:06.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.896 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.896 [2024-07-15 16:40:46.407365] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:24:06.896 [2024-07-15 16:40:46.407444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611592 ] 00:24:06.896 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.896 [2024-07-15 16:40:46.473992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.154 [2024-07-15 16:40:46.590996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.154 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.154 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:07.154 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:07.154 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:07.154 16:40:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:07.720 16:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.720 16:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.979 nvme0n1 00:24:07.979 16:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:07.979 16:40:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:24:07.979 Running I/O for 2 seconds... 00:24:10.516 00:24:10.516 Latency(us) 00:24:10.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.516 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:10.516 nvme0n1 : 2.00 19791.44 77.31 0.00 0.00 6458.88 3519.53 11408.12 00:24:10.516 =================================================================================================================== 00:24:10.516 Total : 19791.44 77.31 0.00 0.00 6458.88 3519.53 11408.12 00:24:10.516 0 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:10.516 | select(.opcode=="crc32c") 00:24:10.516 | "\(.module_name) \(.executed)"' 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1611592 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1611592 ']' 00:24:10.516 16:40:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1611592 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611592 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611592' 00:24:10.516 killing process with pid 1611592 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1611592 00:24:10.516 Received shutdown signal, test time was about 2.000000 seconds 00:24:10.516 00:24:10.516 Latency(us) 00:24:10.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.516 =================================================================================================================== 00:24:10.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.516 16:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1611592 00:24:10.516 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:10.516 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:10.517 16:40:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1612002 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1612002 /var/tmp/bperf.sock 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1612002 ']' 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.517 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:10.775 [2024-07-15 16:40:50.147115] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:24:10.775 [2024-07-15 16:40:50.147198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612002 ] 00:24:10.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:10.775 Zero copy mechanism will not be used. 00:24:10.775 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.775 [2024-07-15 16:40:50.204925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.775 [2024-07-15 16:40:50.316362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.775 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.775 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:10.775 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:10.775 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:10.775 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:11.351 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.351 16:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.609 nvme0n1 00:24:11.609 16:40:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:11.609 16:40:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:11.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:11.866 Zero copy mechanism will not be used. 00:24:11.866 Running I/O for 2 seconds... 00:24:13.773 00:24:13.773 Latency(us) 00:24:13.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.773 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:13.773 nvme0n1 : 2.00 2931.56 366.44 0.00 0.00 5453.27 5121.52 7039.05 00:24:13.773 =================================================================================================================== 00:24:13.773 Total : 2931.56 366.44 0.00 0.00 5453.27 5121.52 7039.05 00:24:13.773 0 00:24:13.773 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:13.773 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:13.773 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:13.773 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:13.773 | select(.opcode=="crc32c") 00:24:13.773 | "\(.module_name) \(.executed)"' 00:24:13.773 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1612002 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1612002 ']' 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1612002 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1612002 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1612002' 00:24:14.031 killing process with pid 1612002 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1612002 00:24:14.031 Received shutdown signal, test time was about 2.000000 seconds 00:24:14.031 00:24:14.031 Latency(us) 00:24:14.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.031 =================================================================================================================== 00:24:14.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.031 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1612002 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1612470 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1612470 /var/tmp/bperf.sock 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1612470 ']' 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:14.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.289 16:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:14.289 [2024-07-15 16:40:53.881437] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:24:14.289 [2024-07-15 16:40:53.881536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612470 ] 00:24:14.547 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.547 [2024-07-15 16:40:53.945445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.547 [2024-07-15 16:40:54.061656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.547 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.547 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:14.547 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:14.547 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:14.547 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:15.114 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.114 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.371 nvme0n1 00:24:15.371 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:15.371 16:40:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:24:15.371 Running I/O for 2 seconds... 00:24:17.910 00:24:17.910 Latency(us) 00:24:17.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.910 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:17.910 nvme0n1 : 2.01 19441.98 75.95 0.00 0.00 6568.62 4150.61 10145.94 00:24:17.910 =================================================================================================================== 00:24:17.910 Total : 19441.98 75.95 0.00 0.00 6568.62 4150.61 10145.94 00:24:17.910 0 00:24:17.910 16:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:17.910 16:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:17.910 16:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:17.910 16:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:17.910 | select(.opcode=="crc32c") 00:24:17.910 | "\(.module_name) \(.executed)"' 00:24:17.910 16:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1612470 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1612470 ']' 00:24:17.910 16:40:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1612470 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1612470 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1612470' 00:24:17.910 killing process with pid 1612470 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1612470 00:24:17.910 Received shutdown signal, test time was about 2.000000 seconds 00:24:17.910 00:24:17.910 Latency(us) 00:24:17.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.910 =================================================================================================================== 00:24:17.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1612470 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:17.910 16:40:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1612936 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1612936 /var/tmp/bperf.sock 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1612936 ']' 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:17.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.910 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:17.910 [2024-07-15 16:40:57.506186] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:24:17.910 [2024-07-15 16:40:57.506274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612936 ] 00:24:17.910 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:17.910 Zero copy mechanism will not be used. 00:24:18.168 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.168 [2024-07-15 16:40:57.567967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.168 [2024-07-15 16:40:57.681253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.168 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.168 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:18.168 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:18.168 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:18.168 16:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:18.733 16:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.733 16:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.990 nvme0n1 00:24:18.991 16:40:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:18.991 16:40:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.247 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:19.248 Zero copy mechanism will not be used. 00:24:19.248 Running I/O for 2 seconds... 00:24:21.203 00:24:21.203 Latency(us) 00:24:21.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.203 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:21.203 nvme0n1 : 2.01 1556.14 194.52 0.00 0.00 10251.34 7912.87 18835.53 00:24:21.203 =================================================================================================================== 00:24:21.203 Total : 1556.14 194.52 0.00 0.00 10251.34 7912.87 18835.53 00:24:21.203 0 00:24:21.203 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:21.203 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:21.203 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:21.203 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:21.203 | select(.opcode=="crc32c") 00:24:21.203 | "\(.module_name) \(.executed)"' 00:24:21.203 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1612936 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1612936 ']' 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1612936 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1612936 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1612936' 00:24:21.461 killing process with pid 1612936 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1612936 00:24:21.461 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.461 00:24:21.461 Latency(us) 00:24:21.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.461 =================================================================================================================== 00:24:21.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.461 16:41:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1612936 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1611573 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1611573 ']' 00:24:21.720 
16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1611573 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611573 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611573' 00:24:21.720 killing process with pid 1611573 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1611573 00:24:21.720 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1611573 00:24:21.978 00:24:21.978 real 0m15.599s 00:24:21.978 user 0m31.342s 00:24:21.978 sys 0m3.841s 00:24:21.978 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:21.978 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:21.978 ************************************ 00:24:21.978 END TEST nvmf_digest_clean 00:24:21.978 ************************************ 00:24:21.978 16:41:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:21.978 16:41:01 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:21.978 16:41:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:21.978 16:41:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.978 16:41:01 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:22.236 ************************************ 00:24:22.236 START TEST nvmf_digest_error 00:24:22.236 ************************************ 00:24:22.236 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:22.236 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:22.236 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1613375 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1613375 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1613375 ']' 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.237 16:41:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.237 [2024-07-15 16:41:01.641212] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:24:22.237 [2024-07-15 16:41:01.641310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.237 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.237 [2024-07-15 16:41:01.709293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.237 [2024-07-15 16:41:01.827642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.237 [2024-07-15 16:41:01.827704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.237 [2024-07-15 16:41:01.827730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.237 [2024-07-15 16:41:01.827742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.237 [2024-07-15 16:41:01.827754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:22.237 [2024-07-15 16:41:01.827784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.173 [2024-07-15 16:41:02.606263] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.173 null0
00:24:23.173 [2024-07-15 16:41:02.719402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:23.173 [2024-07-15 16:41:02.743614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1613527
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1613527 /var/tmp/bperf.sock
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1613527 ']'
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:23.173 16:41:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.431 [2024-07-15 16:41:02.790994] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:24:23.431 [2024-07-15 16:41:02.791075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613527 ]
00:24:23.431 EAL: No free 2048 kB hugepages reported on node 1
00:24:23.431 [2024-07-15 16:41:02.852691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.431 [2024-07-15 16:41:02.978832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:23.688 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:23.688 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:23.688 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:23.688 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:23.945 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:23.945 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.945 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.945 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.945 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:23.945 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:24.202 nvme0n1
00:24:24.202 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:24.202 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:24.202 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:24.202 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:24.202 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:24.202 16:41:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:24.202 Running I/O for 2 seconds...
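[editor's note] The digest being corrupted in this test is the NVMe/TCP data digest, a CRC-32C over the PDU payload: the `accel_assign_opc -o crc32c -m error` and `accel_error_inject_error -o crc32c -t corrupt` RPCs above route crc32c through SPDK's error-injection accel module so the target emits bad digests, which the host then reports as the `data digest error` records that follow. As a hedged illustration only (SPDK computes this through its accel framework, typically hardware-accelerated; this is merely a reference bit-at-a-time version):

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli), the polynomial NVMe/TCP uses for PDU digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the reflected form of the polynomial 0x1EDC6F41
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C
assert crc32c(b"123456789") == 0xE3069283
```

A receiver that computes a different value over the payload than the digest carried in the PDU declares a data digest error, which is exactly what `nvme_tcp_accel_seq_recv_compute_crc32_done` logs below.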
00:24:24.460 [2024-07-15 16:41:03.820883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.820942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.460 [2024-07-15 16:41:03.820971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.460 [2024-07-15 16:41:03.836542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.836589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.460 [2024-07-15 16:41:03.836607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.460 [2024-07-15 16:41:03.849476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.849507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.460 [2024-07-15 16:41:03.849524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.460 [2024-07-15 16:41:03.862923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.862954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.460 [2024-07-15 16:41:03.862987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.460 [2024-07-15 16:41:03.877934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.877980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.460 [2024-07-15 16:41:03.877997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.460 [2024-07-15 16:41:03.889545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.889589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.460 [2024-07-15 16:41:03.889605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.460 [2024-07-15 16:41:03.902995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.903023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.460 [2024-07-15 16:41:03.903054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.460 [2024-07-15 16:41:03.915558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.460 [2024-07-15 16:41:03.915588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:03.915620] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:03.928840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:03.928869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:03.928908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:03.943409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:03.943445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:03.943478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:03.956366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:03.956394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:03.956424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:03.969206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:03.969239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15168 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:03.969258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:03.983243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:03.983273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:03.983305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:03.997760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:03.997807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:03.997824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:04.009508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:04.009542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:04.009561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:04.024931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:04.024975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:04.024990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:04.038741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:04.038774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:04.038794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.461 [2024-07-15 16:41:04.049832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.461 [2024-07-15 16:41:04.049866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.461 [2024-07-15 16:41:04.049894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.719 [2024-07-15 16:41:04.064265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.719 [2024-07-15 16:41:04.064300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.719 [2024-07-15 16:41:04.064318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.719 [2024-07-15 16:41:04.078319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.719 [2024-07-15 
16:41:04.078349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.719 [2024-07-15 16:41:04.078365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.719 [2024-07-15 16:41:04.094084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.719 [2024-07-15 16:41:04.094117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.719 [2024-07-15 16:41:04.094134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.719 [2024-07-15 16:41:04.105590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.719 [2024-07-15 16:41:04.105625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.719 [2024-07-15 16:41:04.105644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.719 [2024-07-15 16:41:04.120438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.120473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.120504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.135299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.135329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.135360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.146606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.146640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.146659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.162064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.162109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.162125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.176001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.176031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.176057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.189706] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.189735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.189766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.202487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.202515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.202546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.217823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.217852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.217893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.229059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.229086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.229117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.243971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.244004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.244023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.259722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.259756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.259774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.272500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.272543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.272559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.284389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.284416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.284447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.299282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.299316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.299349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.720 [2024-07-15 16:41:04.310941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.720 [2024-07-15 16:41:04.310974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.720 [2024-07-15 16:41:04.310992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.325071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.325101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.325118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.339478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.339506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 
16:41:04.339538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.353460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.353489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.353520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.366794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.366824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.366840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.382209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.382239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.382255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.394082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.394110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22321 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.394141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.407534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.407563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.407594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.421690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.421723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.421741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.435934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.435963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.435995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.448474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.448504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.448520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.462142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.462172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.462189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.474220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.474247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.474261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.487702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:24.978 [2024-07-15 16:41:04.487732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.978 [2024-07-15 16:41:04.487749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.978 [2024-07-15 16:41:04.503501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22ebd50)
00:24:24.978 [2024-07-15 16:41:04.503532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.978 [2024-07-15 16:41:04.503548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:24.978 [2024-07-15 16:41:04.516393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:24.979 [2024-07-15 16:41:04.516422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.979 [2024-07-15 16:41:04.516455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:24.979 [2024-07-15 16:41:04.530572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:24.979 [2024-07-15 16:41:04.530621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.979 [2024-07-15 16:41:04.530639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:24.979 [2024-07-15 16:41:04.542841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:24.979 [2024-07-15 16:41:04.542890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.979 [2024-07-15 16:41:04.542908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:24.979 [2024-07-15 16:41:04.558151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:24.979 [2024-07-15 16:41:04.558194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.979 [2024-07-15 16:41:04.558210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:24.979 [2024-07-15 16:41:04.572881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:24.979 [2024-07-15 16:41:04.572926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.979 [2024-07-15 16:41:04.572942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.584755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.584788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.584807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.598739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.598768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.598799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.611545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.611592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.611609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.624657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.624685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.624718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.639235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.639264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.639296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.651377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.651411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.651429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.665537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.665580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.665600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.679903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.679948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.679964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.694415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.694458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.694474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.707706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.707735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.707767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.723487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.723530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.723546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.735635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.735662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.735692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.751388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.751421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.751439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.765113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.765143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.765169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.777035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.777069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.777087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.791561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.791603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.791620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.805318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.805361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.805377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.817328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.817356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.817386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.237 [2024-07-15 16:41:04.832133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.237 [2024-07-15 16:41:04.832163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.237 [2024-07-15 16:41:04.832180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.843947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.843976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.843992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.858902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.858950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.858967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.874815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.874848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.874867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.886271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.886318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.886338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.902348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.902382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.902402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.914028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.914056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.914086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.928347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.928375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.928407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.942951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.942979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.942996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.955006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.955033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.955064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.970681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.970714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.970733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.496 [2024-07-15 16:41:04.983063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.496 [2024-07-15 16:41:04.983093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.496 [2024-07-15 16:41:04.983110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:04.995139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:04.995167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:04.995199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:05.010794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:05.010828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:05.010847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:05.023817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:05.023847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:05.023863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:05.035979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:05.036006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:05.036037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:05.049652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:05.049680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:05.049712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:05.062122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:05.062152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:05.062192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:05.077464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:05.077498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:05.077517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.497 [2024-07-15 16:41:05.088655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.497 [2024-07-15 16:41:05.088689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.497 [2024-07-15 16:41:05.088707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.103039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.103069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.103102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.116236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.116267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.116297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.130215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.130247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.130264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.143476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.143507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.143524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.157437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.157467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.157483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.168784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.168818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.168837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.183940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.183985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.184002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.199361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.199389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.199420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.213070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.213101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.213118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.225483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.225517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.225536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.239961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.239992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.240009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.251664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.251692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.251723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.265674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.265708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.265726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.280126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.280164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.280179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.292828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.292856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.292893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.756 [2024-07-15 16:41:05.306349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.756 [2024-07-15 16:41:05.306378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.756 [2024-07-15 16:41:05.306409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.757 [2024-07-15 16:41:05.317847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.757 [2024-07-15 16:41:05.317889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.757 [2024-07-15 16:41:05.317910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.757 [2024-07-15 16:41:05.332382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.757 [2024-07-15 16:41:05.332411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.757 [2024-07-15 16:41:05.332442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:25.757 [2024-07-15 16:41:05.348187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:25.757 [2024-07-15 16:41:05.348232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.757 [2024-07-15 16:41:05.348258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.360830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.360859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.360898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.374856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.374895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.374913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.386432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.386463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.386481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.400963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.400992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.401025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.414287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.414317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.414349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.427607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.427637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.427654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.440411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.440439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.440470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.453280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.453308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.453339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.468369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.468409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.468428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.479844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.479899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.479919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.494886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.494913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.494945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.508798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.508832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.508850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.523785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.523814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.523846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.538365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.538394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.538425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.554118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.554148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.554165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.566653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.566697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.566714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.581811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.581840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.581857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.593849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50)
00:24:26.017 [2024-07-15 16:41:05.593890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.017 [2024-07-15 16:41:05.593910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.017 [2024-07-15 16:41:05.611596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x22ebd50) 00:24:26.017 [2024-07-15 16:41:05.611625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.017 [2024-07-15 16:41:05.611655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.624016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.624045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.624077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.637531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.637560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.637591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.650833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.650863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.650906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.664554] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.664582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.664614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.677731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.677776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.677793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.689763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.689791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.689806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.705293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.705322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.705357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.718477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.718507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.718523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.730325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.730353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.730385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.745200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.745228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.745259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.756470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.756503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.756522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.771277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.771306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.771321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.784499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.784528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.784545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.796684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.796712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.796743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 [2024-07-15 16:41:05.808858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ebd50) 00:24:26.275 [2024-07-15 16:41:05.808898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.275 [2024-07-15 16:41:05.808946] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.275 00:24:26.275 Latency(us) 00:24:26.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.275 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:26.275 nvme0n1 : 2.01 18704.04 73.06 0.00 0.00 6832.83 3179.71 19515.16 00:24:26.275 =================================================================================================================== 00:24:26.275 Total : 18704.04 73.06 0.00 0.00 6832.83 3179.71 19515.16 00:24:26.275 0 00:24:26.276 16:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:26.276 16:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:26.276 16:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:26.276 | .driver_specific 00:24:26.276 | .nvme_error 00:24:26.276 | .status_code 00:24:26.276 | .command_transient_transport_error' 00:24:26.276 16:41:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 )) 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1613527 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1613527 ']' 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1613527 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1613527 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1613527' 00:24:26.533 killing process with pid 1613527 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1613527 00:24:26.533 Received shutdown signal, test time was about 2.000000 seconds 00:24:26.533 00:24:26.533 Latency(us) 00:24:26.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.533 =================================================================================================================== 00:24:26.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.533 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1613527 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1614027 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 
2 -q 16 -z 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1614027 /var/tmp/bperf.sock 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1614027 ']' 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:26.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.791 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.049 [2024-07-15 16:41:06.404365] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:24:27.049 [2024-07-15 16:41:06.404441] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614027 ] 00:24:27.049 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:27.049 Zero copy mechanism will not be used. 
00:24:27.049 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.049 [2024-07-15 16:41:06.464472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.049 [2024-07-15 16:41:06.573778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.306 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.306 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:27.306 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:27.306 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:27.564 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:27.564 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.564 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.564 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.564 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:27.564 16:41:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:27.821 nvme0n1 00:24:27.821 16:41:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:24:27.822 16:41:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.822 16:41:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.822 16:41:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.822 16:41:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:27.822 16:41:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:28.081 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:28.081 Zero copy mechanism will not be used. 00:24:28.081 Running I/O for 2 seconds... 00:24:28.081 [2024-07-15 16:41:07.501288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.501356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.501378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.512751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.512799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.512818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.523873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.523930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.523947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.534942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.534973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.535005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.545854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.545899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.545935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.556924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.556954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.556984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.567839] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.567873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.567902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.578764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.578798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.578817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.589752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.589786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.589804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.600758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.600792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.600818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.611773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.611806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.611825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.622741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.622775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.622794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.633818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.633852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.633870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.644790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.644823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.644842] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.655728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.655761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.655780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.666728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.666763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.666782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.081 [2024-07-15 16:41:07.677809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.081 [2024-07-15 16:41:07.677844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.081 [2024-07-15 16:41:07.677862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.341 [2024-07-15 16:41:07.688970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:28.341 [2024-07-15 16:41:07.689000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.341 [2024-07-15 
16:41:07.689017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.341 [2024-07-15 16:41:07.699961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.341 [2024-07-15 16:41:07.699991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.341 [2024-07-15 16:41:07.700008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.341 [2024-07-15 16:41:07.710988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.341 [2024-07-15 16:41:07.711018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.341 [2024-07-15 16:41:07.711034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.341 [2024-07-15 16:41:07.722166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.341 [2024-07-15 16:41:07.722211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.341 [2024-07-15 16:41:07.722231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.341 [2024-07-15 16:41:07.733120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.341 [2024-07-15 16:41:07.733167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.341 [2024-07-15 16:41:07.733184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.341 [2024-07-15 16:41:07.744172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.341 [2024-07-15 16:41:07.744201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.341 [2024-07-15 16:41:07.744232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.755251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.755284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.755302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.766377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.766412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.766431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.777346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.777381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.777399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.788381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.788415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.788439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.799492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.799526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.799544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.810591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.810626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.810644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.821694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.821729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.821748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.832288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.832319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.832351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.842719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.842763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.842781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.852838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.852868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.852909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.863193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.863223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.863240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.873679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.873709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.873742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.884043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.884094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.884111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.894170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.894215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.894231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.904279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.904323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.904339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.914283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.914327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.914344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.924524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.924553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.924585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.342 [2024-07-15 16:41:07.934733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.342 [2024-07-15 16:41:07.934762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.342 [2024-07-15 16:41:07.934793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:07.944873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:07.944911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:07.944928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:07.955218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:07.955262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:07.955279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:07.965448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:07.965478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:07.965510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:07.975693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:07.975723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:07.975755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:07.985901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:07.985945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:07.985962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:07.995902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:07.995946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:07.995963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.006181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.006225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.006241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.016370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.016401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.016431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.026548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.026578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.026611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.036745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.036775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.036807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.047116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.047146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.047179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.057401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.057446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.057471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.067624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.067668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.067685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.077867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.077921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.077938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.088008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.088038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.088055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.098080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.098112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.098129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.108335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.108379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.108396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.118400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.118429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.118462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.128484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.128512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.128544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.138531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.138561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.138593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.148528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.148558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.148591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.158796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.158826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.158858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.168965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.168994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.169011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.602 [2024-07-15 16:41:08.179000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.602 [2024-07-15 16:41:08.179045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.602 [2024-07-15 16:41:08.179062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.603 [2024-07-15 16:41:08.189415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.603 [2024-07-15 16:41:08.189444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.603 [2024-07-15 16:41:08.189476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.199500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.199531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.199564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.209514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.209543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.209576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.219525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.219571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.219588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.229734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.229764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.229787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.239906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.239936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.239953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.249946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.249976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.249993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.259903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.259933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.259950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.269994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.270026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.270044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.279864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.279916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.279934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.289892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.289936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.289953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.300065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.300109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.300126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.310351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.310381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.310413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.320710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.320759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.320777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.330943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.330987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.331003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.341181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.341226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.341242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.351335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.351365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.351396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.361800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.361829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.361861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.371854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.371904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.371923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.381893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.863 [2024-07-15 16:41:08.381937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.863 [2024-07-15 16:41:08.381955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.863 [2024-07-15 16:41:08.392104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.864 [2024-07-15 16:41:08.392134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.864 [2024-07-15 16:41:08.392150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.864 [2024-07-15 16:41:08.402170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.864 [2024-07-15 16:41:08.402215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.864 [2024-07-15 16:41:08.402231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.864 [2024-07-15 16:41:08.412352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.864 [2024-07-15 16:41:08.412395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.864 [2024-07-15 16:41:08.412412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.864 [2024-07-15 16:41:08.422717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.864 [2024-07-15 16:41:08.422746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.864 [2024-07-15 16:41:08.422762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:28.864 [2024-07-15 16:41:08.433063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.864 [2024-07-15 16:41:08.433093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.864 [2024-07-15 16:41:08.433110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:28.864 [2024-07-15 16:41:08.443498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.864 [2024-07-15 16:41:08.443527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.864 [2024-07-15 16:41:08.443560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:28.864 [2024-07-15 16:41:08.453627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:28.864 [2024-07-15 16:41:08.453656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.864 [2024-07-15 16:41:08.453689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:29.125 [2024-07-15 16:41:08.463843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:29.125 [2024-07-15 16:41:08.463897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.125 [2024-07-15 16:41:08.463915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:29.125 [2024-07-15 16:41:08.474003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:29.125 [2024-07-15 16:41:08.474034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.125 [2024-07-15 16:41:08.474051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:29.125 [2024-07-15 16:41:08.484151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0)
00:24:29.125 [2024-07-15 16:41:08.484195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.125 [2024-07-15 16:41:08.484211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.494335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.494364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 16:41:08.494405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.504410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.504454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 16:41:08.504470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.514551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.514582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 16:41:08.514599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.524717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.524763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 
16:41:08.524780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.535005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.535052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 16:41:08.535069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.545108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.545165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 16:41:08.545183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.555326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.555382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 16:41:08.555399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.565490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.565520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.125 [2024-07-15 16:41:08.565552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.125 [2024-07-15 16:41:08.575568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.125 [2024-07-15 16:41:08.575609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.575642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.585644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.585674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.585706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.595815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.595847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.595864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.605834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.605863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.605909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.615949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.615996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.616013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.626311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.626342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.626374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.636560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.636605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.636622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.646647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.646693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.646710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.656929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.656974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.656992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.666974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.667005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.667033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.677096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.677127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.677144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.687106] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.687136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.687153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.697115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.697146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.697163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.707191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.707221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.707252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.126 [2024-07-15 16:41:08.717477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.126 [2024-07-15 16:41:08.717507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.126 [2024-07-15 16:41:08.717540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.727450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.727480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.737649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.737693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.737710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.748012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.748043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.748060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.758415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.758466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.758483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.768426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.768456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.768488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.778661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.778691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.778708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.789146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.789191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.789207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.800195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.800243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 
16:41:08.800261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.811191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.811237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.811256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.822205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.822252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.822271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.833240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.833283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.833300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.844320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.844354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.844373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.855277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.855311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.855329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.866260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.866293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.866311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.877291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.877325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.877344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.888290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.888324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.888343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.899308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.899341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.899359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.910277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.910310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.910328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.921201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.921248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.387 [2024-07-15 16:41:08.921267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.387 [2024-07-15 16:41:08.932165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x25264f0) 00:24:29.387 [2024-07-15 16:41:08.932211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.388 [2024-07-15 16:41:08.932230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.388 [2024-07-15 16:41:08.943184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.388 [2024-07-15 16:41:08.943237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.388 [2024-07-15 16:41:08.943257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.388 [2024-07-15 16:41:08.954176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.388 [2024-07-15 16:41:08.954223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.388 [2024-07-15 16:41:08.954242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.388 [2024-07-15 16:41:08.965122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.388 [2024-07-15 16:41:08.965167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.388 [2024-07-15 16:41:08.965184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.388 [2024-07-15 16:41:08.976358] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.388 [2024-07-15 16:41:08.976392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.388 [2024-07-15 16:41:08.976411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.648 [2024-07-15 16:41:08.987373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.648 [2024-07-15 16:41:08.987408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.648 [2024-07-15 16:41:08.987426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.648 [2024-07-15 16:41:08.998360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.648 [2024-07-15 16:41:08.998394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.648 [2024-07-15 16:41:08.998413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.648 [2024-07-15 16:41:09.009424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.648 [2024-07-15 16:41:09.009457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.648 [2024-07-15 16:41:09.009476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:24:29.648 [2024-07-15 16:41:09.020449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.648 [2024-07-15 16:41:09.020483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.648 [2024-07-15 16:41:09.020502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.648 [2024-07-15 16:41:09.031580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.648 [2024-07-15 16:41:09.031614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.648 [2024-07-15 16:41:09.031633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.648 [2024-07-15 16:41:09.042553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.648 [2024-07-15 16:41:09.042587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.648 [2024-07-15 16:41:09.042606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.648 [2024-07-15 16:41:09.053535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.648 [2024-07-15 16:41:09.053568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.053586] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.064524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.064558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.064577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.075678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.075711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.075730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.086645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.086679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.086697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.097677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.097710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 
16:41:09.097729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.108961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.109006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.109023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.119978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.120007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.120043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.131001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.131030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.131067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.142029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.142072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.142088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.153159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.153203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.153219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.164493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.164525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.164542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.175567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.175601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.175619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.186853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.186896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.186931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.197805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.197838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.197856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.208963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.208992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.209024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.219846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.219889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.219910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.230850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.230915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.230934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.649 [2024-07-15 16:41:09.241947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.649 [2024-07-15 16:41:09.241976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.649 [2024-07-15 16:41:09.242009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.252940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.252970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.253002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.263874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.263927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.263944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.274803] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.274836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.274854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.285844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.285886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.285923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.296862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.296921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.296939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.307840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.307873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.307901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.318840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.318874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.318903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.329775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.329808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.329826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.340774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.340808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.340827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.351781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.351814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.351832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.362770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.362804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.362823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.373730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.373764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.373783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.384778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.384811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.384830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.395838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.395872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 
16:41:09.395902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.909 [2024-07-15 16:41:09.406782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.909 [2024-07-15 16:41:09.406815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.909 [2024-07-15 16:41:09.406833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.910 [2024-07-15 16:41:09.417797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.910 [2024-07-15 16:41:09.417837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.910 [2024-07-15 16:41:09.417856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.910 [2024-07-15 16:41:09.428939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.910 [2024-07-15 16:41:09.428969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.910 [2024-07-15 16:41:09.429000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.910 [2024-07-15 16:41:09.439917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.910 [2024-07-15 16:41:09.439947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.910 [2024-07-15 16:41:09.439979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.910 [2024-07-15 16:41:09.450935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.910 [2024-07-15 16:41:09.450979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.910 [2024-07-15 16:41:09.450995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.910 [2024-07-15 16:41:09.462203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.910 [2024-07-15 16:41:09.462248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.910 [2024-07-15 16:41:09.462268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.910 [2024-07-15 16:41:09.473416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.910 [2024-07-15 16:41:09.473449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.910 [2024-07-15 16:41:09.473467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.910 [2024-07-15 16:41:09.484357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25264f0) 00:24:29.910 [2024-07-15 16:41:09.484391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.910 [2024-07-15 16:41:09.484410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:29.910
00:24:29.910 Latency(us)
00:24:29.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:29.910 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:29.910 nvme0n1 : 2.00 2917.77 364.72 0.00 0.00 5478.86 4878.79 13786.83
00:24:29.910 ===================================================================================================================
00:24:29.910 Total : 2917.77 364.72 0.00 0.00 5478.86 4878.79 13786.83
00:24:29.910 0
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:30.168 | .driver_specific
00:24:30.168 | .nvme_error
00:24:30.168 | .status_code
00:24:30.168 | .command_transient_transport_error'
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 188 > 0 ))
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1614027
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1614027 ']'
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1614027
00:24:30.168 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:30.427 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:30.427 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1614027
00:24:30.427 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:30.427 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:30.427 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1614027'
00:24:30.427 killing process with pid 1614027
00:24:30.427 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1614027
00:24:30.427 Received shutdown signal, test time was about 2.000000 seconds
00:24:30.427
00:24:30.427 Latency(us)
00:24:30.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:30.427 ===================================================================================================================
00:24:30.427 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:30.427 16:41:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1614027
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1614463
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1614463 /var/tmp/bperf.sock
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1614463 ']'
00:24:30.685 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:30.686 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:30.686 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:30.686 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:30.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:30.686 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:30.686 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:30.686 [2024-07-15 16:41:10.103042] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:24:30.686 [2024-07-15 16:41:10.103126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614463 ]
00:24:30.686 EAL: No free 2048 kB hugepages reported on node 1
00:24:30.686 [2024-07-15 16:41:10.161262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:30.686 [2024-07-15 16:41:10.272022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:30.943 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:30.943 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:30.943 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:30.943 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:31.201 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:31.201 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:31.201 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:31.201 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:31.201 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:31.201 16:41:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:31.460 nvme0n1
00:24:31.720 16:41:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:31.720 16:41:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:31.720 16:41:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:31.720 16:41:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:31.720 16:41:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:31.720 16:41:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:31.720 Running I/O for 2 seconds...
00:24:31.720 [2024-07-15 16:41:11.192858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f8a50 00:24:31.720 [2024-07-15 16:41:11.194208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.194251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.206349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fda78 00:24:31.720 [2024-07-15 16:41:11.207829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.207862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.219760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e6300 00:24:31.720 [2024-07-15 16:41:11.221389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.221423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.231721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f20d8 00:24:31.720 [2024-07-15 16:41:11.232860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.232899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.244309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ee190 00:24:31.720 [2024-07-15 16:41:11.245421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.245453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.257073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ed0b0 00:24:31.720 [2024-07-15 16:41:11.258182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.258212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.270186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fb8b8 00:24:31.720 [2024-07-15 16:41:11.271478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.271511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.283168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fc998 00:24:31.720 [2024-07-15 16:41:11.284460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.284493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.296268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ed920 00:24:31.720 [2024-07-15 16:41:11.297697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.297729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.720 [2024-07-15 16:41:11.309466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fd208 00:24:31.720 [2024-07-15 16:41:11.311098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.720 [2024-07-15 16:41:11.311127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.319948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f5378 00:24:31.981 [2024-07-15 16:41:11.320899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.320955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.332689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f8a50 00:24:31.981 [2024-07-15 16:41:11.333659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.333691] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.345413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e6738 00:24:31.981 [2024-07-15 16:41:11.346359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.346391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.358560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f0bc0 00:24:31.981 [2024-07-15 16:41:11.359320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.359353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.371943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ee190 00:24:31.981 [2024-07-15 16:41:11.372858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.372900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.385177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e6300 00:24:31.981 [2024-07-15 16:41:11.386292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14141 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:31.981 [2024-07-15 16:41:11.386324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.399765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f4b08 00:24:31.981 [2024-07-15 16:41:11.401900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.401944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.408767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e0a68 00:24:31.981 [2024-07-15 16:41:11.409730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.409761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.421657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f5be8 00:24:31.981 [2024-07-15 16:41:11.422616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.422648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.434383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ea248 00:24:31.981 [2024-07-15 16:41:11.435325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:24187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.435356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.447026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ef6a8 00:24:31.981 [2024-07-15 16:41:11.448002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.448035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.459753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f0788 00:24:31.981 [2024-07-15 16:41:11.460721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.460754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.472505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f1868 00:24:31.981 [2024-07-15 16:41:11.473484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.473516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.485231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e9168 00:24:31.981 [2024-07-15 16:41:11.486247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.486278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.497863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fac10 00:24:31.981 [2024-07-15 16:41:11.498796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.498828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.510567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f9b30 00:24:31.981 [2024-07-15 16:41:11.511520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.511552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.523167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e6b70 00:24:31.981 [2024-07-15 16:41:11.524131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.524160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.535841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e7c50 
00:24:31.981 [2024-07-15 16:41:11.536817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.536848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.548575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190de470 00:24:31.981 [2024-07-15 16:41:11.549520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.549551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.561273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fef90 00:24:31.981 [2024-07-15 16:41:11.562205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.562233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.981 [2024-07-15 16:41:11.573974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fdeb0 00:24:31.981 [2024-07-15 16:41:11.574967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.981 [2024-07-15 16:41:11.574995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.587077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x192a6b0) with pdu=0x2000190f8a50 00:24:32.242 [2024-07-15 16:41:11.587896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.587940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.599821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ed920 00:24:32.242 [2024-07-15 16:41:11.600960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.600989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.612546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e1f80 00:24:32.242 [2024-07-15 16:41:11.613639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.613671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.625175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e0ea0 00:24:32.242 [2024-07-15 16:41:11.626240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.626271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.637805] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f31b8 00:24:32.242 [2024-07-15 16:41:11.638936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.638965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.650533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190eee38 00:24:32.242 [2024-07-15 16:41:11.651646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.651678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.663236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190df550 00:24:32.242 [2024-07-15 16:41:11.664322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.664354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.676215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fcdd0 00:24:32.242 [2024-07-15 16:41:11.677298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.677329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 
m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.690450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fa3a0 00:24:32.242 [2024-07-15 16:41:11.692278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.692310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.702422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e5658 00:24:32.242 [2024-07-15 16:41:11.703711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.703742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.715046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ed0b0 00:24:32.242 [2024-07-15 16:41:11.716355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.716387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.727576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e1710 00:24:32.242 [2024-07-15 16:41:11.728874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.728928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.740259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190eea00 00:24:32.242 [2024-07-15 16:41:11.741516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.741547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.753007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f3a28 00:24:32.242 [2024-07-15 16:41:11.754270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.754302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.766059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f4298 00:24:32.242 [2024-07-15 16:41:11.767157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.767187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.778821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190eaef0 00:24:32.242 [2024-07-15 16:41:11.780251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.780287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.791893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e49b0 00:24:32.242 [2024-07-15 16:41:11.793437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.793465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.801697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e01f8 00:24:32.242 [2024-07-15 16:41:11.802590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.802619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.813698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f5378 00:24:32.242 [2024-07-15 16:41:11.814645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.814674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.825754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f1430 00:24:32.242 [2024-07-15 16:41:11.826685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 
[2024-07-15 16:41:11.826713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.242 [2024-07-15 16:41:11.837915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e95a0 00:24:32.242 [2024-07-15 16:41:11.838798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.242 [2024-07-15 16:41:11.838827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.849964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ddc00 00:24:32.502 [2024-07-15 16:41:11.850804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.850833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.861904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fb8b8 00:24:32.502 [2024-07-15 16:41:11.862761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.862789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.873899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fd208 00:24:32.502 [2024-07-15 16:41:11.874716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10974 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.874746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.885790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f9f68 00:24:32.502 [2024-07-15 16:41:11.886663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.886692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.897791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fe2e8 00:24:32.502 [2024-07-15 16:41:11.898689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.898717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.909617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e4578 00:24:32.502 [2024-07-15 16:41:11.910571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.910601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.921549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e5ec8 00:24:32.502 [2024-07-15 16:41:11.922400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:84 nsid:1 lba:10135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.922431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.932521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190eb760 00:24:32.502 [2024-07-15 16:41:11.933451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.933480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.944850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ee5c8 00:24:32.502 [2024-07-15 16:41:11.945939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.945968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.957175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190edd58 00:24:32.502 [2024-07-15 16:41:11.958435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.958463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.969463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ee190 00:24:32.502 [2024-07-15 16:41:11.970808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.970839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.980407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e7818 00:24:32.502 [2024-07-15 16:41:11.981306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.981346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:11.992025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e88f8 00:24:32.502 [2024-07-15 16:41:11.992934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:11.992962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:32.502 [2024-07-15 16:41:12.003937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e3498 00:24:32.502 [2024-07-15 16:41:12.004836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.502 [2024-07-15 16:41:12.004867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:32.503 [2024-07-15 16:41:12.015973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with 
pdu=0x2000190f4b08 00:24:32.503 [2024-07-15 16:41:12.016807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.503 [2024-07-15 16:41:12.016836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:32.503 [2024-07-15 16:41:12.027006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e4140 00:24:32.503 [2024-07-15 16:41:12.027897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.503 [2024-07-15 16:41:12.027936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:32.503 [2024-07-15 16:41:12.039328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ed4e8 00:24:32.503 [2024-07-15 16:41:12.040361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.503 [2024-07-15 16:41:12.040390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:32.503 [2024-07-15 16:41:12.052462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190de8a8 00:24:32.503 [2024-07-15 16:41:12.053578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.503 [2024-07-15 16:41:12.053607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:32.503 [2024-07-15 16:41:12.064389] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f9b30 00:24:32.503 [2024-07-15 16:41:12.065350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.503 [2024-07-15 16:41:12.065380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:32.503 [2024-07-15 16:41:12.076353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e01f8 00:24:32.503 [2024-07-15 16:41:12.077664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.503 [2024-07-15 16:41:12.077693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:32.503 [2024-07-15 16:41:12.088352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f5378 00:24:32.503 [2024-07-15 16:41:12.089633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.503 [2024-07-15 16:41:12.089669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:32.764 [2024-07-15 16:41:12.100305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f1430 00:24:32.764 [2024-07-15 16:41:12.101669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.764 [2024-07-15 16:41:12.101699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:32.764 [2024-07-15 
16:41:12.112192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f0bc0 00:24:32.764 [2024-07-15 16:41:12.113565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.764 [2024-07-15 16:41:12.113594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:32.764 [2024-07-15 16:41:12.124039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190efae0 00:24:32.764 [2024-07-15 16:41:12.125352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.764 [2024-07-15 16:41:12.125381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:32.764 [2024-07-15 16:41:12.135797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f6890 00:24:32.764 [2024-07-15 16:41:12.137069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.764 [2024-07-15 16:41:12.137098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:32.764 [2024-07-15 16:41:12.147578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fac10 00:24:32.764 [2024-07-15 16:41:12.148951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.764 [2024-07-15 16:41:12.148981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.159440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f5be8
00:24:32.764 [2024-07-15 16:41:12.160815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.160844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.171394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f7da8
00:24:32.764 [2024-07-15 16:41:12.172764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.172794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.183223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e7818
00:24:32.764 [2024-07-15 16:41:12.184538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.184566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.195088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e88f8
00:24:32.764 [2024-07-15 16:41:12.196408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.196436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.206769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fef90
00:24:32.764 [2024-07-15 16:41:12.208032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.208060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.218624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e1f80
00:24:32.764 [2024-07-15 16:41:12.219961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.219990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.232016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e4de8
00:24:32.764 [2024-07-15 16:41:12.233885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.233926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.240307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f0bc0
00:24:32.764 [2024-07-15 16:41:12.241123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.241152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.252361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190efae0
00:24:32.764 [2024-07-15 16:41:12.253241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.253270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.264165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190feb58
00:24:32.764 [2024-07-15 16:41:12.265017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.265046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.276027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190de470
00:24:32.764 [2024-07-15 16:41:12.276968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.276997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.287772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fc560
00:24:32.764 [2024-07-15 16:41:12.288615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.288643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.299620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f20d8
00:24:32.764 [2024-07-15 16:41:12.300451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.300478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.311489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ee190
00:24:32.764 [2024-07-15 16:41:12.312386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.312415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.322482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f8e88
00:24:32.764 [2024-07-15 16:41:12.323292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.323321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.335688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e3d08
00:24:32.764 [2024-07-15 16:41:12.336740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.764 [2024-07-15 16:41:12.336769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:32.764 [2024-07-15 16:41:12.347562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f4298
00:24:32.764 [2024-07-15 16:41:12.348553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.765 [2024-07-15 16:41:12.348581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:32.765 [2024-07-15 16:41:12.359459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e12d8
00:24:32.765 [2024-07-15 16:41:12.360559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.765 [2024-07-15 16:41:12.360588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.371458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f35f0
00:24:33.025 [2024-07-15 16:41:12.372546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.372575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.383500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ef270
00:24:33.025 [2024-07-15 16:41:12.384496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.384525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.395458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f81e0
00:24:33.025 [2024-07-15 16:41:12.396494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.396528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.407280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e73e0
00:24:33.025 [2024-07-15 16:41:12.408343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.408372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.419078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e84c0
00:24:33.025 [2024-07-15 16:41:12.420052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.420081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.430855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ff3c8
00:24:33.025 [2024-07-15 16:41:12.431910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.431939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.442695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e4578
00:24:33.025 [2024-07-15 16:41:12.443759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.443789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.454680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f8a50
00:24:33.025 [2024-07-15 16:41:12.455761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.455789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.466582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e4de8
00:24:33.025 [2024-07-15 16:41:12.467643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.467678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.478314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e1f80
00:24:33.025 [2024-07-15 16:41:12.479368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.479396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.490157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f1868
00:24:33.025 [2024-07-15 16:41:12.491157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.491186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.025 [2024-07-15 16:41:12.502455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fc998
00:24:33.025 [2024-07-15 16:41:12.503600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.025 [2024-07-15 16:41:12.503629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.514422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ea680
00:24:33.026 [2024-07-15 16:41:12.515672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.515701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.526322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e27f0
00:24:33.026 [2024-07-15 16:41:12.527527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.527567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.538286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fb480
00:24:33.026 [2024-07-15 16:41:12.539413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.539442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.550126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f96f8
00:24:33.026 [2024-07-15 16:41:12.551270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.551299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.561123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e4140
00:24:33.026 [2024-07-15 16:41:12.562235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.562263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.574184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ea248
00:24:33.026 [2024-07-15 16:41:12.575529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.575557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.586147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f4b08
00:24:33.026 [2024-07-15 16:41:12.587428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.587456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.597994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fbcf0
00:24:33.026 [2024-07-15 16:41:12.599335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.599363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.609732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fdeb0
00:24:33.026 [2024-07-15 16:41:12.611074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.026 [2024-07-15 16:41:12.611102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.026 [2024-07-15 16:41:12.621637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e8d30
00:24:33.289 [2024-07-15 16:41:12.622995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.623024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.633531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e01f8
00:24:33.289 [2024-07-15 16:41:12.634788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.634817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.645351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f5378
00:24:33.289 [2024-07-15 16:41:12.646638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.646665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.657226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f1430
00:24:33.289 [2024-07-15 16:41:12.658575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.658604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.669097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ed4e8
00:24:33.289 [2024-07-15 16:41:12.670477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.670506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.681247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f2d80
00:24:33.289 [2024-07-15 16:41:12.682627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.682655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.693214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f7100
00:24:33.289 [2024-07-15 16:41:12.694502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.694529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.705090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ec840
00:24:33.289 [2024-07-15 16:41:12.706425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.706460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.716887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f6890
00:24:33.289 [2024-07-15 16:41:12.718190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.718219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.728735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e5220
00:24:33.289 [2024-07-15 16:41:12.730005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.730035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.739656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ff3c8
00:24:33.289 [2024-07-15 16:41:12.740957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.289 [2024-07-15 16:41:12.740984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:33.289 [2024-07-15 16:41:12.750542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f81e0
00:24:33.289 [2024-07-15 16:41:12.751473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.751503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.761291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f20d8
00:24:33.290 [2024-07-15 16:41:12.762088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.762116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.773425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fd640
00:24:33.290 [2024-07-15 16:41:12.774392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.774421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.787519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e23b8
00:24:33.290 [2024-07-15 16:41:12.788776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.788807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.800337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e99d8
00:24:33.290 [2024-07-15 16:41:12.801565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.801597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.813004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f4298
00:24:33.290 [2024-07-15 16:41:12.814262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.814299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.825667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e3d08
00:24:33.290 [2024-07-15 16:41:12.826935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.826963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.838464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fa3a0
00:24:33.290 [2024-07-15 16:41:12.839666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.839698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.851137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fe720
00:24:33.290 [2024-07-15 16:41:12.852358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.852389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.863735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e5a90
00:24:33.290 [2024-07-15 16:41:12.864994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.865022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.290 [2024-07-15 16:41:12.876507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190df988
00:24:33.290 [2024-07-15 16:41:12.877760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.290 [2024-07-15 16:41:12.877791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.889230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f20d8
00:24:33.598 [2024-07-15 16:41:12.890487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.890520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.902296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e27f0
00:24:33.598 [2024-07-15 16:41:12.903312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.903345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.915171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e3498
00:24:33.598 [2024-07-15 16:41:12.916548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.916579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.927857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e7c50
00:24:33.598 [2024-07-15 16:41:12.929247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.929279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.940433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e6b70
00:24:33.598 [2024-07-15 16:41:12.941814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.941846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.953148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fb8b8
00:24:33.598 [2024-07-15 16:41:12.954526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.954557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.964676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fe2e8
00:24:33.598 [2024-07-15 16:41:12.966778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.966809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.975683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190ef270
00:24:33.598 [2024-07-15 16:41:12.976563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.976593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:12.989861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f2948
00:24:33.598 [2024-07-15 16:41:12.990949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:12.990978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:13.002634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f6cc8
00:24:33.598 [2024-07-15 16:41:13.003685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:13.003716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:13.015328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e01f8
00:24:33.598 [2024-07-15 16:41:13.016373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:13.016404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:13.027951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e9e10
00:24:33.598 [2024-07-15 16:41:13.029040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:13.029068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:33.598 [2024-07-15 16:41:13.040999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e2c28
00:24:33.598 [2024-07-15 16:41:13.042262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:33.598 [2024-07-15 16:41:13.042293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.053885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f8e88 00:24:33.598 [2024-07-15 16:41:13.055208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.055254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.066591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190dece0 00:24:33.598 [2024-07-15 16:41:13.067834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.067865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.079690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e6b70 00:24:33.598 [2024-07-15 16:41:13.081182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.081228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.092621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e7c50 00:24:33.598 [2024-07-15 16:41:13.094094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.094123] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.105483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e3498 00:24:33.598 [2024-07-15 16:41:13.106892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.106941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.118237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190eb760 00:24:33.598 [2024-07-15 16:41:13.119600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.119632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.130999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f4298 00:24:33.598 [2024-07-15 16:41:13.132385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.132416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.143669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fc128 00:24:33.598 [2024-07-15 16:41:13.145169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.145203] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.598 [2024-07-15 16:41:13.156438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190fda78 00:24:33.598 [2024-07-15 16:41:13.157829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.598 [2024-07-15 16:41:13.157861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.856 [2024-07-15 16:41:13.169158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190e9168 00:24:33.856 [2024-07-15 16:41:13.170580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.856 [2024-07-15 16:41:13.170612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.856 [2024-07-15 16:41:13.182099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192a6b0) with pdu=0x2000190f5378 00:24:33.856 [2024-07-15 16:41:13.183527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.856 [2024-07-15 16:41:13.183559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:33.856 00:24:33.856 Latency(us) 00:24:33.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.856 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:33.856 nvme0n1 : 2.01 20735.51 81.00 0.00 0.00 6161.80 2463.67 16408.27 00:24:33.856 
=================================================================================================================== 00:24:33.856 Total : 20735.51 81.00 0.00 0.00 6161.80 2463.67 16408.27 00:24:33.856 0 00:24:33.856 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:33.856 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:33.856 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:33.856 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:33.856 | .driver_specific 00:24:33.856 | .nvme_error 00:24:33.856 | .status_code 00:24:33.856 | .command_transient_transport_error' 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1614463 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1614463 ']' 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1614463 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1614463 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:34.114 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1614463' 00:24:34.115 killing process with pid 1614463 00:24:34.115 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1614463 00:24:34.115 Received shutdown signal, test time was about 2.000000 seconds 00:24:34.115 00:24:34.115 Latency(us) 00:24:34.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.115 =================================================================================================================== 00:24:34.115 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.115 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1614463 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1614871 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1614871 /var/tmp/bperf.sock 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1614871 ']' 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.372 16:41:13 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.372 16:41:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.372 [2024-07-15 16:41:13.826786] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:24:34.372 [2024-07-15 16:41:13.826899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614871 ] 00:24:34.372 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:34.372 Zero copy mechanism will not be used. 
00:24:34.372 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.372 [2024-07-15 16:41:13.884538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.629 [2024-07-15 16:41:13.995872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.629 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.629 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:34.629 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:34.629 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:34.886 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:34.886 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.886 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.886 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.886 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:34.886 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.453 nvme0n1 00:24:35.453 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:24:35.453 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.453 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:35.453 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.453 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:35.454 16:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:35.454 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:35.454 Zero copy mechanism will not be used. 00:24:35.454 Running I/O for 2 seconds... 00:24:35.454 [2024-07-15 16:41:14.929260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.454 [2024-07-15 16:41:14.929706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.454 [2024-07-15 16:41:14.929749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.454 [2024-07-15 16:41:14.946747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.454 [2024-07-15 16:41:14.947155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.454 [2024-07-15 16:41:14.947203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.454 [2024-07-15 16:41:14.963991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with 
pdu=0x2000190fef90 00:24:35.454 [2024-07-15 16:41:14.964418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.454 [2024-07-15 16:41:14.964450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.454 [2024-07-15 16:41:14.981530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.454 [2024-07-15 16:41:14.982048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.454 [2024-07-15 16:41:14.982078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.454 [2024-07-15 16:41:15.000983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.454 [2024-07-15 16:41:15.001396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.454 [2024-07-15 16:41:15.001429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.454 [2024-07-15 16:41:15.021796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.454 [2024-07-15 16:41:15.022212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.454 [2024-07-15 16:41:15.022245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.454 [2024-07-15 16:41:15.039727] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.454 [2024-07-15 16:41:15.040122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.454 [2024-07-15 16:41:15.040164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.056728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.057146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.057188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.074066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.074459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.074485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.091108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.091493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.091534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.107225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.107634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.107661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.124439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.124812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.124839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.141898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.142309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.142336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.158415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.158779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.158806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.174844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.175286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.175319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.713 [2024-07-15 16:41:15.192215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.713 [2024-07-15 16:41:15.192675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.713 [2024-07-15 16:41:15.192703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.714 [2024-07-15 16:41:15.209516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.714 [2024-07-15 16:41:15.209931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.714 [2024-07-15 16:41:15.209976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.714 [2024-07-15 16:41:15.227551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.714 [2024-07-15 16:41:15.227936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.714 [2024-07-15 16:41:15.227977] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.714 [2024-07-15 16:41:15.244741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.714 [2024-07-15 16:41:15.245136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.714 [2024-07-15 16:41:15.245181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.714 [2024-07-15 16:41:15.262725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.714 [2024-07-15 16:41:15.263136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.714 [2024-07-15 16:41:15.263178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.714 [2024-07-15 16:41:15.279652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.714 [2024-07-15 16:41:15.279930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.714 [2024-07-15 16:41:15.279960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.714 [2024-07-15 16:41:15.294743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.714 [2024-07-15 16:41:15.295051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.714 [2024-07-15 16:41:15.295093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.311693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.312150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.973 [2024-07-15 16:41:15.312193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.329071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.329469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.973 [2024-07-15 16:41:15.329512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.345203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.345604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.973 [2024-07-15 16:41:15.345631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.362974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.363389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.973 [2024-07-15 16:41:15.363416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.379393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.379773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.973 [2024-07-15 16:41:15.379815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.396659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.397089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.973 [2024-07-15 16:41:15.397134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.413892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.414375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.973 [2024-07-15 16:41:15.414414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:35.973 [2024-07-15 16:41:15.431184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:35.973 [2024-07-15 16:41:15.431587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.973 [2024-07-15 16:41:15.431614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:35.973 [2024-07-15 16:41:15.449394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.973 [2024-07-15 16:41:15.449807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.973 [2024-07-15 16:41:15.449850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:35.973 [2024-07-15 16:41:15.466804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.973 [2024-07-15 16:41:15.467213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.973 [2024-07-15 16:41:15.467256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:35.973 [2024-07-15 16:41:15.483049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.974 [2024-07-15 16:41:15.483443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.974 [2024-07-15 16:41:15.483483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:35.974 [2024-07-15 16:41:15.499751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.974 [2024-07-15 16:41:15.500157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.974 [2024-07-15 16:41:15.500185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:35.974 [2024-07-15 16:41:15.516044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.974 [2024-07-15 16:41:15.516441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.974 [2024-07-15 16:41:15.516466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:35.974 [2024-07-15 16:41:15.531845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.974 [2024-07-15 16:41:15.532292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.974 [2024-07-15 16:41:15.532319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:35.974 [2024-07-15 16:41:15.548137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.974 [2024-07-15 16:41:15.548571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.974 [2024-07-15 16:41:15.548597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:35.974 [2024-07-15 16:41:15.564612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:35.974 [2024-07-15 16:41:15.565086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.974 [2024-07-15 16:41:15.565114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.584265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.584698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.584724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.603681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.604089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.604131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.621255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.621640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.621672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.639390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.639883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.639910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.655423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.655763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.655790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.673040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.673308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.673350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.689249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.689632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.689672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.707797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.708207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.708250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.723214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.723634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.723677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.742019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.742432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.742472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.758363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.758767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.758806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.775439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.775815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.775859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.793685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.794108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.794136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.231 [2024-07-15 16:41:15.812434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.231 [2024-07-15 16:41:15.812821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.231 [2024-07-15 16:41:15.812847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.829437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.829818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.829861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.845780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.846208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.846235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.861905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.862332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.862374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.879947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.880400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.880441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.895899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.896291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.896331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.913865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.914294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.914325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.933221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.933594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.933635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.949731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.950150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.950177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.968129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.968538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.968567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:15.985348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:15.985730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:15.985777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:16.001034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:16.001429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:16.001456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:16.021678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:16.022115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:16.022143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:16.038526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:16.038948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:16.038975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:16.056085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:16.056475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:16.056501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.488 [2024-07-15 16:41:16.072316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.488 [2024-07-15 16:41:16.072693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.488 [2024-07-15 16:41:16.072734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.090829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.091115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.091143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.108308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.108841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.108866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.125166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.125551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.125578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.141407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.141777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.141803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.157028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.157465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.157489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.174859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.175292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.175330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.191122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.191507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.191548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.208221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.208629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.208672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.225985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.226353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.226396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.242644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.243123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.243150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.260632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.261059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.261100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.276908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.277315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.277341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.294254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.294613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.294640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.312135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.312430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.312455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.327095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:36.746 [2024-07-15 16:41:16.327488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.746 [2024-07-15 16:41:16.327527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:36.746 [2024-07-15 16:41:16.343309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.343719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.343769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.360353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.360755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.360786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.376848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.377276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.377318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.392350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.392742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.392782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.409426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.409792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.409818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.427264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.427635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.427661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.444960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.445349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.445375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.461868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.462329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.462354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.479234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.479650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.479677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.496805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.497202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.497243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.513752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.514194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.514235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.531074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.531468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.531510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.548421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.548803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.548844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.566059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.566460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.566487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.583319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.583766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.583791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.005 [2024-07-15 16:41:16.600083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.005 [2024-07-15 16:41:16.600506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.005 [2024-07-15 16:41:16.600535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.264 [2024-07-15 16:41:16.616083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.264 [2024-07-15 16:41:16.616504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.264 [2024-07-15 16:41:16.616545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.264 [2024-07-15 16:41:16.633323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.264 [2024-07-15 16:41:16.633707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.264 [2024-07-15 16:41:16.633748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:37.264 [2024-07-15 16:41:16.651087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.264 [2024-07-15 16:41:16.651521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.264 [2024-07-15 16:41:16.651547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:37.264 [2024-07-15 16:41:16.669184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.264 [2024-07-15 16:41:16.669568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.264 [2024-07-15 16:41:16.669594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:37.264 [2024-07-15 16:41:16.686294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.264 [2024-07-15 16:41:16.686714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.264 [2024-07-15 16:41:16.686755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.264 [2024-07-15 16:41:16.702617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90
00:24:37.264 [2024-07-15 16:41:16.703010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.703051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.719944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.720348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.720388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.736083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.736490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.736518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.752056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.752258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.752287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.768077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.768461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.768488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.785607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.786127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.786154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.803207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.803593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.803624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.819143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.819578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.819617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 
16:41:16.835011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.835416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.835456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.264 [2024-07-15 16:41:16.850466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.264 [2024-07-15 16:41:16.850860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.264 [2024-07-15 16:41:16.850910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.522 [2024-07-15 16:41:16.867162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.522 [2024-07-15 16:41:16.867507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.522 [2024-07-15 16:41:16.867533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:37.522 [2024-07-15 16:41:16.881919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.522 [2024-07-15 16:41:16.882310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.522 [2024-07-15 16:41:16.882337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:37.522 [2024-07-15 16:41:16.898934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.522 [2024-07-15 16:41:16.899365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.522 [2024-07-15 16:41:16.899392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.522 [2024-07-15 16:41:16.917043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x175faf0) with pdu=0x2000190fef90 00:24:37.522 [2024-07-15 16:41:16.917346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.522 [2024-07-15 16:41:16.917388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.522 00:24:37.522 Latency(us) 00:24:37.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.522 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:37.522 nvme0n1 : 2.01 1811.51 226.44 0.00 0.00 8810.65 2949.12 21359.88 00:24:37.522 =================================================================================================================== 00:24:37.522 Total : 1811.51 226.44 0.00 0.00 8810.65 2949.12 21359.88 00:24:37.522 0 00:24:37.522 16:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:37.522 16:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:37.522 16:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:37.522 | .driver_specific 00:24:37.522 | .nvme_error 00:24:37.522 | .status_code 
00:24:37.522 | .command_transient_transport_error' 00:24:37.522 16:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 )) 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1614871 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1614871 ']' 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1614871 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1614871 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1614871' 00:24:37.782 killing process with pid 1614871 00:24:37.782 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1614871 00:24:37.782 Received shutdown signal, test time was about 2.000000 seconds 00:24:37.782 00:24:37.782 Latency(us) 00:24:37.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.782 =================================================================================================================== 00:24:37.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.782 16:41:17 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1614871 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1613375 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1613375 ']' 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1613375 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1613375 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1613375' 00:24:38.042 killing process with pid 1613375 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1613375 00:24:38.042 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1613375 00:24:38.300 00:24:38.300 real 0m16.187s 00:24:38.300 user 0m31.987s 00:24:38.300 sys 0m3.953s 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:38.300 ************************************ 00:24:38.300 END TEST nvmf_digest_error 00:24:38.300 ************************************ 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 
00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.300 rmmod nvme_tcp 00:24:38.300 rmmod nvme_fabrics 00:24:38.300 rmmod nvme_keyring 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1613375 ']' 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1613375 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1613375 ']' 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1613375 00:24:38.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1613375) - No such process 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1613375 is not found' 00:24:38.300 Process with pid 1613375 is not found 00:24:38.300 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.301 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.301 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.301 
16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.301 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.301 16:41:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.301 16:41:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.301 16:41:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.838 16:41:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.838 00:24:40.838 real 0m36.134s 00:24:40.838 user 1m4.137s 00:24:40.838 sys 0m9.293s 00:24:40.838 16:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.838 16:41:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:40.838 ************************************ 00:24:40.838 END TEST nvmf_digest 00:24:40.838 ************************************ 00:24:40.838 16:41:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:40.838 16:41:19 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:40.838 16:41:19 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:40.838 16:41:19 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:40.838 16:41:19 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:40.838 16:41:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:40.838 16:41:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.838 16:41:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.838 ************************************ 00:24:40.838 START TEST nvmf_bdevperf 00:24:40.838 ************************************ 00:24:40.838 16:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:40.838 * Looking for test storage... 00:24:40.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.838 16:41:20 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.838 16:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.744 16:41:21 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.744 
16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:42.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:42.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:42.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.744 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:42.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.745 16:41:21 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:24:42.745 00:24:42.745 --- 10.0.0.2 ping statistics --- 00:24:42.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.745 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:42.745 00:24:42.745 --- 10.0.0.1 ping statistics --- 00:24:42.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.745 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:42.745 16:41:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1617225 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1617225 00:24:42.745 16:41:22 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1617225 ']' 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.745 16:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.745 [2024-07-15 16:41:22.076154] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:24:42.745 [2024-07-15 16:41:22.076246] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.745 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.745 [2024-07-15 16:41:22.149203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:42.745 [2024-07-15 16:41:22.266729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.745 [2024-07-15 16:41:22.266787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.745 [2024-07-15 16:41:22.266803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.745 [2024-07-15 16:41:22.266817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:42.745 [2024-07-15 16:41:22.266829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.745 [2024-07-15 16:41:22.267017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.745 [2024-07-15 16:41:22.267045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.745 [2024-07-15 16:41:22.267049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.680 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.680 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:43.680 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:43.680 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.680 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.680 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.680 16:41:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.681 [2024-07-15 16:41:23.082634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.681 Malloc0 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.681 [2024-07-15 16:41:23.143140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.681 { 00:24:43.681 "params": { 00:24:43.681 "name": "Nvme$subsystem", 00:24:43.681 "trtype": "$TEST_TRANSPORT", 00:24:43.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.681 "adrfam": "ipv4", 00:24:43.681 "trsvcid": "$NVMF_PORT", 00:24:43.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.681 "hdgst": ${hdgst:-false}, 00:24:43.681 "ddgst": ${ddgst:-false} 00:24:43.681 }, 00:24:43.681 "method": "bdev_nvme_attach_controller" 00:24:43.681 } 00:24:43.681 EOF 00:24:43.681 )") 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:43.681 16:41:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:43.681 "params": { 00:24:43.681 "name": "Nvme1", 00:24:43.681 "trtype": "tcp", 00:24:43.681 "traddr": "10.0.0.2", 00:24:43.681 "adrfam": "ipv4", 00:24:43.681 "trsvcid": "4420", 00:24:43.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.681 "hdgst": false, 00:24:43.681 "ddgst": false 00:24:43.681 }, 00:24:43.681 "method": "bdev_nvme_attach_controller" 00:24:43.681 }' 00:24:43.681 [2024-07-15 16:41:23.192839] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
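The `gen_nvmf_target_json` expansion shown above builds the `--json` config for bdevperf by filling a heredoc template once per subsystem and joining the fragments with `IFS=,`. A reduced sketch of the same technique; the fixed address, port, and NQN below are placeholders standing in for the `$TEST_TRANSPORT` / `$NVMF_FIRST_TARGET_IP` / `$NVMF_PORT` variables the real helper substitutes:

```shell
#!/usr/bin/env bash
# Reduced sketch of the gen_nvmf_target_json pattern from nvmf/common.sh:
# one heredoc fragment per subsystem, joined with commas.
set -euo pipefail

config=()
for subsystem in 1; do
  # Fixed values stand in for the environment variables the real helper uses.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments the same way common.sh@557 does, with IFS=,
json=$(IFS=,; printf '%s' "${config[*]}")
echo "$json"
```

The real script additionally pipes the joined fragments through `jq .` (common.sh@556) to validate and pretty-print before handing them to bdevperf on `/dev/fd/62`.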
00:24:43.681 [2024-07-15 16:41:23.192940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617376 ] 00:24:43.681 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.681 [2024-07-15 16:41:23.251324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.939 [2024-07-15 16:41:23.365070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.196 Running I/O for 1 seconds... 00:24:45.131 00:24:45.131 Latency(us) 00:24:45.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.131 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:45.131 Verification LBA range: start 0x0 length 0x4000 00:24:45.131 Nvme1n1 : 1.01 7024.45 27.44 0.00 0.00 18122.66 3422.44 20680.25 00:24:45.131 =================================================================================================================== 00:24:45.131 Total : 7024.45 27.44 0.00 0.00 18122.66 3422.44 20680.25 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1617641 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:24:45.699 { 00:24:45.699 "params": { 00:24:45.699 "name": "Nvme$subsystem", 00:24:45.699 "trtype": "$TEST_TRANSPORT", 00:24:45.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.699 "adrfam": "ipv4", 00:24:45.699 "trsvcid": "$NVMF_PORT", 00:24:45.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.699 "hdgst": ${hdgst:-false}, 00:24:45.699 "ddgst": ${ddgst:-false} 00:24:45.699 }, 00:24:45.699 "method": "bdev_nvme_attach_controller" 00:24:45.699 } 00:24:45.699 EOF 00:24:45.699 )") 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:45.699 16:41:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:45.699 "params": { 00:24:45.699 "name": "Nvme1", 00:24:45.699 "trtype": "tcp", 00:24:45.699 "traddr": "10.0.0.2", 00:24:45.699 "adrfam": "ipv4", 00:24:45.699 "trsvcid": "4420", 00:24:45.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.699 "hdgst": false, 00:24:45.699 "ddgst": false 00:24:45.699 }, 00:24:45.699 "method": "bdev_nvme_attach_controller" 00:24:45.699 }' 00:24:45.699 [2024-07-15 16:41:25.030667] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
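The second bdevperf run above is started with `-t 15` so that host/bdevperf.sh has time to `kill -9` the nvmf target mid-workload (bdevperf.sh@33); the flood of ABORTED - SQ DELETION completions that follows is the expected consequence. The kill-while-busy pattern can be sketched generically; here a `sleep` process stands in for `nvmf_tgt`, and the timings are illustrative only:

```shell
#!/usr/bin/env bash
# Generic sketch of the kill-the-target-mid-workload step in bdevperf.sh:
# start a long-running "target", let the workload run, SIGKILL the target,
# then observe its termination status.
set -u

sleep 100 &                 # stand-in for the nvmf_tgt process
tgtpid=$!

sleep 0.2                   # stand-in for "Running I/O for 15 seconds..."

kill -9 "$tgtpid"           # bdevperf.sh@33 does: kill -9 $nvmfpid
wait "$tgtpid"
status=$?                   # shell reports 128 + signal number for SIGKILL

echo "target exit status: $status"
```

In the actual test the host side keeps running after the kill, which is why the subsequent log shows every in-flight command completing with an aborted status rather than bdevperf exiting immediately.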
00:24:45.699 [2024-07-15 16:41:25.030756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617641 ] 00:24:45.699 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.699 [2024-07-15 16:41:25.089883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.699 [2024-07-15 16:41:25.200136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.957 Running I/O for 15 seconds... 00:24:48.486 16:41:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1617225 00:24:48.486 16:41:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:48.486 [2024-07-15 16:41:27.999986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.486 [2024-07-15 16:41:28.000036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.486 [2024-07-15 16:41:28.000068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.486 [2024-07-15 16:41:28.000086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.486 [2024-07-15 16:41:28.000104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.486 [2024-07-15 16:41:28.000120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.486 [2024-07-15 16:41:28.000137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:48.486 [2024-07-15 16:41:28.000168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical READ / ABORTED - SQ DELETION notice pairs for lba 49344 through 49720, and one WRITE at lba 50272 ...] 00:24:48.487 [2024-07-15 16:41:28.001789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 
[2024-07-15 16:41:28.001836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.487 [2024-07-15 16:41:28.001974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.487 [2024-07-15 16:41:28.001988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:48.488 [2024-07-15 16:41:28.002397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.488 [2024-07-15 16:41:28.002459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.488 [2024-07-15 16:41:28.002490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.488 [2024-07-15 16:41:28.002521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.488 [2024-07-15 16:41:28.002553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.488 [2024-07-15 16:41:28.002584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.488 [2024-07-15 16:41:28.002616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.488 [2024-07-15 16:41:28.002647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 
[2024-07-15 16:41:28.002956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.002971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.002984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.488 [2024-07-15 16:41:28.003321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.488 [2024-07-15 16:41:28.003335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 
[2024-07-15 16:41:28.003491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.003976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.003990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.004004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.004018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.004032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 
[2024-07-15 16:41:28.004046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.004061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.004074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.004089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.004102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.004117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.004130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.004145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.489 [2024-07-15 16:41:28.004173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.489 [2024-07-15 16:41:28.004190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9064c0 is same with the state(5) to be set 00:24:48.489 [2024-07-15 16:41:28.004208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.489 [2024-07-15 16:41:28.004220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.489 
[2024-07-15 16:41:28.004232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50264 len:8 PRP1 0x0 PRP2 0x0
00:24:48.489 [2024-07-15 16:41:28.004246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.489 [2024-07-15 16:41:28.004314] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9064c0 was disconnected and freed. reset controller.
00:24:48.489 [2024-07-15 16:41:28.008125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.489 [2024-07-15 16:41:28.008209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.489 [2024-07-15 16:41:28.008949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.489 [2024-07-15 16:41:28.008978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.489 [2024-07-15 16:41:28.008994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.489 [2024-07-15 16:41:28.009229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.489 [2024-07-15 16:41:28.009476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.489 [2024-07-15 16:41:28.009500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.489 [2024-07-15 16:41:28.009517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.489 [2024-07-15 16:41:28.013109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.489 [2024-07-15 16:41:28.022299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.489 [2024-07-15 16:41:28.022748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.489 [2024-07-15 16:41:28.022778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.489 [2024-07-15 16:41:28.022796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.489 [2024-07-15 16:41:28.023060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.489 [2024-07-15 16:41:28.023311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.489 [2024-07-15 16:41:28.023335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.489 [2024-07-15 16:41:28.023350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.489 [2024-07-15 16:41:28.026893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.489 [2024-07-15 16:41:28.036328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.489 [2024-07-15 16:41:28.036759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.489 [2024-07-15 16:41:28.036792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.489 [2024-07-15 16:41:28.036810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.489 [2024-07-15 16:41:28.037069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.489 [2024-07-15 16:41:28.037313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.489 [2024-07-15 16:41:28.037336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.489 [2024-07-15 16:41:28.037351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.489 [2024-07-15 16:41:28.040924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.489 [2024-07-15 16:41:28.050177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.489 [2024-07-15 16:41:28.050736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.489 [2024-07-15 16:41:28.050789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.489 [2024-07-15 16:41:28.050807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.490 [2024-07-15 16:41:28.051054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.490 [2024-07-15 16:41:28.051296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.490 [2024-07-15 16:41:28.051318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.490 [2024-07-15 16:41:28.051333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.490 [2024-07-15 16:41:28.054904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.490 [2024-07-15 16:41:28.064145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.490 [2024-07-15 16:41:28.064696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.490 [2024-07-15 16:41:28.064746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.490 [2024-07-15 16:41:28.064763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.490 [2024-07-15 16:41:28.065008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.490 [2024-07-15 16:41:28.065268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.490 [2024-07-15 16:41:28.065291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.490 [2024-07-15 16:41:28.065306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.490 [2024-07-15 16:41:28.068862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.490 [2024-07-15 16:41:28.078130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.490 [2024-07-15 16:41:28.078715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.490 [2024-07-15 16:41:28.078777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:48.490 [2024-07-15 16:41:28.078794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:48.490 [2024-07-15 16:41:28.079041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:48.490 [2024-07-15 16:41:28.079283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:48.490 [2024-07-15 16:41:28.079305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:48.490 [2024-07-15 16:41:28.079320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:48.749 [2024-07-15 16:41:28.083043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:48.749 [2024-07-15 16:41:28.092097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.092551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.092582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.092599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.092836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.093090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.093113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.093128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.096687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.105944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.106382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.106413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.106436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.106673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.106927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.106951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.106966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.110526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.119779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.120233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.120264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.120281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.120517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.120757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.120779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.120795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.124366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.133623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.134049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.134080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.134098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.134335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.134575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.134598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.134612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.138182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.147670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.148120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.148150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.148167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.148404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.148644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.148672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.148688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.152254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.161496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.161933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.161964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.161981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.162217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.162458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.162480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.162495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.166061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.175509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.175962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.175993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.176010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.176246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.176486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.176509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.176524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.180090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.189535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.189974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.190004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.190021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.190257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.190497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.190520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.190534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.194101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.203547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.203989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.204019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.204036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.204273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.204513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.204535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.750 [2024-07-15 16:41:28.204550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.750 [2024-07-15 16:41:28.208118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.750 [2024-07-15 16:41:28.217564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.750 [2024-07-15 16:41:28.218003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.750 [2024-07-15 16:41:28.218034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.750 [2024-07-15 16:41:28.218051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.750 [2024-07-15 16:41:28.218287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.750 [2024-07-15 16:41:28.218527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.750 [2024-07-15 16:41:28.218550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.218564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.222131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.231582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.232000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.232031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.232048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.232283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.232524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.232547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.232561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.236127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.245574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.245990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.246021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.246038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.246280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.246521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.246543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.246558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.250126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.259574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.259987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.260018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.260036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.260273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.260513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.260536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.260550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.264117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.273574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.274004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.274037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.274054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.274291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.274532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.274554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.274569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.278137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.287601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.288050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.288081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.288098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.288335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.288576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.288599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.288620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.292187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.301426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.301868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.301904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.301922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.302159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.302399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.302422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.302437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.305999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.315452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.315847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.315884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.315903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.316141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.316381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.316404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.316418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.319982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.329436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.329987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.330018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.330035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.330272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.330513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.330535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.330550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.751 [2024-07-15 16:41:28.334116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.751 [2024-07-15 16:41:28.343377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.751 [2024-07-15 16:41:28.343824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.751 [2024-07-15 16:41:28.343860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:48.751 [2024-07-15 16:41:28.343885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:48.751 [2024-07-15 16:41:28.344142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:48.751 [2024-07-15 16:41:28.344383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.751 [2024-07-15 16:41:28.344406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.751 [2024-07-15 16:41:28.344420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.012 [2024-07-15 16:41:28.347983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.012 [2024-07-15 16:41:28.357248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:49.012 [2024-07-15 16:41:28.357685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.012 [2024-07-15 16:41:28.357716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:49.012 [2024-07-15 16:41:28.357734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:49.012 [2024-07-15 16:41:28.357981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:49.012 [2024-07-15 16:41:28.358222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:49.012 [2024-07-15 16:41:28.358244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:49.012 [2024-07-15 16:41:28.358259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.012 [2024-07-15 16:41:28.361819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.012 [2024-07-15 16:41:28.371086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:49.012 [2024-07-15 16:41:28.371587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.012 [2024-07-15 16:41:28.371642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:49.012 [2024-07-15 16:41:28.371659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:49.012 [2024-07-15 16:41:28.371906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:49.012 [2024-07-15 16:41:28.372146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:49.012 [2024-07-15 16:41:28.372169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:49.012 [2024-07-15 16:41:28.372184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.012 [2024-07-15 16:41:28.375741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.012 [2024-07-15 16:41:28.384993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:49.012 [2024-07-15 16:41:28.385411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.012 [2024-07-15 16:41:28.385441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:49.012 [2024-07-15 16:41:28.385459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:49.012 [2024-07-15 16:41:28.385696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:49.012 [2024-07-15 16:41:28.385953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:49.012 [2024-07-15 16:41:28.385977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:49.012 [2024-07-15 16:41:28.385992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.012 [2024-07-15 16:41:28.389547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.012 [2024-07-15 16:41:28.399016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:49.012 [2024-07-15 16:41:28.399461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.012 [2024-07-15 16:41:28.399491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:49.012 [2024-07-15 16:41:28.399509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:49.012 [2024-07-15 16:41:28.399745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:49.012 [2024-07-15 16:41:28.399995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:49.012 [2024-07-15 16:41:28.400018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:49.012 [2024-07-15 16:41:28.400033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.012 [2024-07-15 16:41:28.403626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.012 [2024-07-15 16:41:28.412901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:49.012 [2024-07-15 16:41:28.413346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.012 [2024-07-15 16:41:28.413377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:49.012 [2024-07-15 16:41:28.413394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:49.012 [2024-07-15 16:41:28.413630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:49.012 [2024-07-15 16:41:28.413872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:49.012 [2024-07-15 16:41:28.413906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:49.012 [2024-07-15 16:41:28.413921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.012 [2024-07-15 16:41:28.417476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:49.012 [2024-07-15 16:41:28.426734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.012 [2024-07-15 16:41:28.427187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.012 [2024-07-15 16:41:28.427218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.012 [2024-07-15 16:41:28.427235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.012 [2024-07-15 16:41:28.427471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.012 [2024-07-15 16:41:28.427711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.012 [2024-07-15 16:41:28.427734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.012 [2024-07-15 16:41:28.427749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.012 [2024-07-15 16:41:28.431320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.012 [2024-07-15 16:41:28.440561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.012 [2024-07-15 16:41:28.440978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.012 [2024-07-15 16:41:28.441015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.012 [2024-07-15 16:41:28.441033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.012 [2024-07-15 16:41:28.441269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.012 [2024-07-15 16:41:28.441509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.012 [2024-07-15 16:41:28.441532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.012 [2024-07-15 16:41:28.441546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.012 [2024-07-15 16:41:28.445123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.012 [2024-07-15 16:41:28.454582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.012 [2024-07-15 16:41:28.455029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.012 [2024-07-15 16:41:28.455060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.012 [2024-07-15 16:41:28.455077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.012 [2024-07-15 16:41:28.455313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.012 [2024-07-15 16:41:28.455554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.012 [2024-07-15 16:41:28.455577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.012 [2024-07-15 16:41:28.455591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.012 [2024-07-15 16:41:28.459163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.468450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.468897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.468930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.468950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.469187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.469428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.469451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.469465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.473037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.482305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.482766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.482796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.482819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.483070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.483311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.483334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.483348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.486916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.496162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.496600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.496630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.496647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.496895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.497136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.497159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.497173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.500733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.509991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.510428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.510458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.510475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.510711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.510961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.510985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.510999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.514562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.523815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.524276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.524306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.524323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.524560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.524800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.524832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.524847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.528422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.537673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.538126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.538157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.538174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.538411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.538651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.538674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.538688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.542273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.551523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.551938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.551970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.551987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.552224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.552464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.552487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.552502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.556075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.565543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.565983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.566013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.566031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.566267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.566507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.566530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.566544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.570116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.579364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.579784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.579814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.579832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.580079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.580320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.580342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.580357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.583928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.593383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.593832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.593862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.593889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.594128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.594369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.594391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.594406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.013 [2024-07-15 16:41:28.597972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.013 [2024-07-15 16:41:28.607226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.013 [2024-07-15 16:41:28.607669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.013 [2024-07-15 16:41:28.607699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.013 [2024-07-15 16:41:28.607716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.013 [2024-07-15 16:41:28.607966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.013 [2024-07-15 16:41:28.608207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.013 [2024-07-15 16:41:28.608230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.013 [2024-07-15 16:41:28.608245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.274 [2024-07-15 16:41:28.611806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.274 [2024-07-15 16:41:28.621068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.274 [2024-07-15 16:41:28.621511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.274 [2024-07-15 16:41:28.621540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.274 [2024-07-15 16:41:28.621557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.274 [2024-07-15 16:41:28.621800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.274 [2024-07-15 16:41:28.622053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.274 [2024-07-15 16:41:28.622076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.274 [2024-07-15 16:41:28.622091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.274 [2024-07-15 16:41:28.625651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.274 [2024-07-15 16:41:28.634904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.274 [2024-07-15 16:41:28.635319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.274 [2024-07-15 16:41:28.635349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.274 [2024-07-15 16:41:28.635367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.274 [2024-07-15 16:41:28.635603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.274 [2024-07-15 16:41:28.635843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.274 [2024-07-15 16:41:28.635865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.274 [2024-07-15 16:41:28.635891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.274 [2024-07-15 16:41:28.639453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.274 [2024-07-15 16:41:28.648921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.274 [2024-07-15 16:41:28.649335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.274 [2024-07-15 16:41:28.649366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.274 [2024-07-15 16:41:28.649383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.274 [2024-07-15 16:41:28.649619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.274 [2024-07-15 16:41:28.649860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.274 [2024-07-15 16:41:28.649895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.274 [2024-07-15 16:41:28.649911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.274 [2024-07-15 16:41:28.653476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.274 [2024-07-15 16:41:28.662943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.274 [2024-07-15 16:41:28.663380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.274 [2024-07-15 16:41:28.663411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.274 [2024-07-15 16:41:28.663428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.274 [2024-07-15 16:41:28.663664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.274 [2024-07-15 16:41:28.663917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.274 [2024-07-15 16:41:28.663941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.274 [2024-07-15 16:41:28.663962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.274 [2024-07-15 16:41:28.667522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.274 [2024-07-15 16:41:28.676770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.274 [2024-07-15 16:41:28.677193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.274 [2024-07-15 16:41:28.677223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.274 [2024-07-15 16:41:28.677240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.274 [2024-07-15 16:41:28.677476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.274 [2024-07-15 16:41:28.677716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.274 [2024-07-15 16:41:28.677739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.274 [2024-07-15 16:41:28.677753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.274 [2024-07-15 16:41:28.681318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.274 [2024-07-15 16:41:28.690767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.274 [2024-07-15 16:41:28.691201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.274 [2024-07-15 16:41:28.691232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.274 [2024-07-15 16:41:28.691249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.274 [2024-07-15 16:41:28.691486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.275 [2024-07-15 16:41:28.691726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.275 [2024-07-15 16:41:28.691748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.275 [2024-07-15 16:41:28.691762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.275 [2024-07-15 16:41:28.695334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.275 [2024-07-15 16:41:28.704790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.275 [2024-07-15 16:41:28.705233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.275 [2024-07-15 16:41:28.705263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.275 [2024-07-15 16:41:28.705280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.275 [2024-07-15 16:41:28.705517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.275 [2024-07-15 16:41:28.705757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.275 [2024-07-15 16:41:28.705779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.275 [2024-07-15 16:41:28.705794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.275 [2024-07-15 16:41:28.709363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.275 [2024-07-15 16:41:28.718610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.275 [2024-07-15 16:41:28.719066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.275 [2024-07-15 16:41:28.719102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.275 [2024-07-15 16:41:28.719119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.275 [2024-07-15 16:41:28.719356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.275 [2024-07-15 16:41:28.719596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.275 [2024-07-15 16:41:28.719619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.275 [2024-07-15 16:41:28.719633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.275 [2024-07-15 16:41:28.723210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.275 [... 28 further identical reset/reconnect failure cycles for nqn.2016-06.io.spdk:cnode1 against 10.0.0.2:4420, 2024-07-15 16:41:28.732458 through 16:41:29.113118, omitted ...]
00:24:49.537 [2024-07-15 16:41:29.122577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.537 [2024-07-15 16:41:29.122981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.537 [2024-07-15 16:41:29.123013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.537 [2024-07-15 16:41:29.123031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.537 [2024-07-15 16:41:29.123268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.537 [2024-07-15 16:41:29.123508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.537 [2024-07-15 16:41:29.123531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.537 [2024-07-15 16:41:29.123546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.537 [2024-07-15 16:41:29.127110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.804 [2024-07-15 16:41:29.136574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.804 [2024-07-15 16:41:29.136994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.804 [2024-07-15 16:41:29.137025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.137042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.137278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.137525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.137548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.137563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.141135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.150590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.150995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.151026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.151043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.151280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.151519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.151542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.151556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.155125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.164570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.164994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.165025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.165043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.165279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.165520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.165543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.165557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.169126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.178592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.178994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.179025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.179042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.179278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.179518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.179541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.179556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.183131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.192618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.193061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.193093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.193110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.193347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.193587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.193610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.193624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.197200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.206461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.206874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.206913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.206930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.207167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.207407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.207429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.207444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.211016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.220491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.220909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.220939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.220957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.221193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.221433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.221455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.221470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.225042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.234523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.234960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.234991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.235017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.235256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.235496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.235518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.235532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.239105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.248355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.248767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.248797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.248814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.249062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.249303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.249325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.249340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.252905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.262368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.262782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.262812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.262829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.263076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.263317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.263340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.263355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.266924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.276380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.276818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.276848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.805 [2024-07-15 16:41:29.276865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.805 [2024-07-15 16:41:29.277112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.805 [2024-07-15 16:41:29.277353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.805 [2024-07-15 16:41:29.277381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.805 [2024-07-15 16:41:29.277396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.805 [2024-07-15 16:41:29.280964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.805 [2024-07-15 16:41:29.290210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.805 [2024-07-15 16:41:29.290647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.805 [2024-07-15 16:41:29.290676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.290693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.290942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.291183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.291206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.291221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.806 [2024-07-15 16:41:29.294778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.806 [2024-07-15 16:41:29.304044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.806 [2024-07-15 16:41:29.304491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.806 [2024-07-15 16:41:29.304522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.304540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.304776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.305026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.305049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.305064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.806 [2024-07-15 16:41:29.308642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.806 [2024-07-15 16:41:29.318903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.806 [2024-07-15 16:41:29.319394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.806 [2024-07-15 16:41:29.319435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.319462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.319764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.320084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.320115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.320138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.806 [2024-07-15 16:41:29.324583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.806 [2024-07-15 16:41:29.333717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.806 [2024-07-15 16:41:29.334245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.806 [2024-07-15 16:41:29.334286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.334312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.334589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.334872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.334916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.334940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.806 [2024-07-15 16:41:29.339354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.806 [2024-07-15 16:41:29.348498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.806 [2024-07-15 16:41:29.349010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.806 [2024-07-15 16:41:29.349052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.349078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.349375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.349653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.349683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.349715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.806 [2024-07-15 16:41:29.354204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.806 [2024-07-15 16:41:29.363423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.806 [2024-07-15 16:41:29.363960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.806 [2024-07-15 16:41:29.364002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.364028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.364309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.364613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.364643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.364668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.806 [2024-07-15 16:41:29.369166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.806 [2024-07-15 16:41:29.378341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.806 [2024-07-15 16:41:29.378862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.806 [2024-07-15 16:41:29.378912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.378947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.379235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.379547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.379577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.379606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.806 [2024-07-15 16:41:29.384069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:49.806 [2024-07-15 16:41:29.393303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.806 [2024-07-15 16:41:29.393816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.806 [2024-07-15 16:41:29.393858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:49.806 [2024-07-15 16:41:29.393903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:49.806 [2024-07-15 16:41:29.394208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:49.806 [2024-07-15 16:41:29.394516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.806 [2024-07-15 16:41:29.394546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.806 [2024-07-15 16:41:29.394568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.101 [2024-07-15 16:41:29.398970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.101 [2024-07-15 16:41:29.408177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.101 [2024-07-15 16:41:29.408705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.101 [2024-07-15 16:41:29.408747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.101 [2024-07-15 16:41:29.408795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.101 [2024-07-15 16:41:29.409127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.101 [2024-07-15 16:41:29.409430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.101 [2024-07-15 16:41:29.409460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.101 [2024-07-15 16:41:29.409483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.101 [2024-07-15 16:41:29.413866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.101 [2024-07-15 16:41:29.423123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.101 [2024-07-15 16:41:29.423648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.101 [2024-07-15 16:41:29.423688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.101 [2024-07-15 16:41:29.423714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.101 [2024-07-15 16:41:29.424007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.101 [2024-07-15 16:41:29.424292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.101 [2024-07-15 16:41:29.424322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.101 [2024-07-15 16:41:29.424358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.101 [2024-07-15 16:41:29.428838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.101 [2024-07-15 16:41:29.438033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.101 [2024-07-15 16:41:29.438569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.101 [2024-07-15 16:41:29.438608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.101 [2024-07-15 16:41:29.438632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.101 [2024-07-15 16:41:29.438933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.101 [2024-07-15 16:41:29.439276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.101 [2024-07-15 16:41:29.439308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.101 [2024-07-15 16:41:29.439330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.101 [2024-07-15 16:41:29.443808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.101 [2024-07-15 16:41:29.453015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.101 [2024-07-15 16:41:29.453556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.101 [2024-07-15 16:41:29.453600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.101 [2024-07-15 16:41:29.453630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.101 [2024-07-15 16:41:29.453940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.101 [2024-07-15 16:41:29.454243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.101 [2024-07-15 16:41:29.454273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.101 [2024-07-15 16:41:29.454295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.101 [2024-07-15 16:41:29.458711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.101 [2024-07-15 16:41:29.467938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.101 [2024-07-15 16:41:29.468429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.101 [2024-07-15 16:41:29.468470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.101 [2024-07-15 16:41:29.468499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.101 [2024-07-15 16:41:29.468795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.101 [2024-07-15 16:41:29.469090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.101 [2024-07-15 16:41:29.469122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.101 [2024-07-15 16:41:29.469142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.101 [2024-07-15 16:41:29.473608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.101 [2024-07-15 16:41:29.482803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.101 [2024-07-15 16:41:29.483345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.101 [2024-07-15 16:41:29.483393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.101 [2024-07-15 16:41:29.483424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.101 [2024-07-15 16:41:29.483750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.101 [2024-07-15 16:41:29.484071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.101 [2024-07-15 16:41:29.484104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.101 [2024-07-15 16:41:29.484127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.101 [2024-07-15 16:41:29.488540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.101 [2024-07-15 16:41:29.497715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.101 [2024-07-15 16:41:29.498222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.101 [2024-07-15 16:41:29.498303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.101 [2024-07-15 16:41:29.498347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.101 [2024-07-15 16:41:29.498659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.101 [2024-07-15 16:41:29.498979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.101 [2024-07-15 16:41:29.499010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.101 [2024-07-15 16:41:29.499035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.101 [2024-07-15 16:41:29.503473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.101 [2024-07-15 16:41:29.512746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.101 [2024-07-15 16:41:29.513277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.101 [2024-07-15 16:41:29.513319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.101 [2024-07-15 16:41:29.513344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.101 [2024-07-15 16:41:29.513643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.101 [2024-07-15 16:41:29.513965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.101 [2024-07-15 16:41:29.513998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.101 [2024-07-15 16:41:29.514021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.101 [2024-07-15 16:41:29.518485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.101 [2024-07-15 16:41:29.527659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.101 [2024-07-15 16:41:29.528171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.101 [2024-07-15 16:41:29.528216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.101 [2024-07-15 16:41:29.528243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.528548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.528854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.528897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.528924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.533388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.542615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.543160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.543203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.543229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.543525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.543804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.543834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.543856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.548342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.557535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.558042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.558082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.558108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.558383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.558672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.558721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.558750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.563231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.572454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.573012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.573064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.573092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.573394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.573697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.573727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.573750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.578181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.587396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.587893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.587939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.587966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.588266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.588567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.588597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.588619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.593049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.602274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.602800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.602843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.602868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.603185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.603494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.603526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.603549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.608007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.617235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.617703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.617743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.617769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.618086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.618397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.618428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.618450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.622904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.632075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.632599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.632643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.632681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.633008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.633323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.633358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.633385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.637797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.646996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.647544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.647584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.647611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.647924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.648224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.648257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.648282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.652696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.661908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.662400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.662441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.662467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.662761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.663086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.663122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.663149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.102 [2024-07-15 16:41:29.667594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.102 [2024-07-15 16:41:29.676768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.102 [2024-07-15 16:41:29.677316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.102 [2024-07-15 16:41:29.677357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.102 [2024-07-15 16:41:29.677384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.102 [2024-07-15 16:41:29.677691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.102 [2024-07-15 16:41:29.678032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.102 [2024-07-15 16:41:29.678073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.102 [2024-07-15 16:41:29.678098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.363 [2024-07-15 16:41:29.682510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.363 [2024-07-15 16:41:29.691705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.363 [2024-07-15 16:41:29.692206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.363 [2024-07-15 16:41:29.692248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.363 [2024-07-15 16:41:29.692276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.363 [2024-07-15 16:41:29.692582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.363 [2024-07-15 16:41:29.692902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.363 [2024-07-15 16:41:29.692933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.363 [2024-07-15 16:41:29.692956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.363 [2024-07-15 16:41:29.697432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.363 [2024-07-15 16:41:29.706606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.363 [2024-07-15 16:41:29.707131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.363 [2024-07-15 16:41:29.707171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.363 [2024-07-15 16:41:29.707197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.363 [2024-07-15 16:41:29.707473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.363 [2024-07-15 16:41:29.707762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.363 [2024-07-15 16:41:29.707792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.363 [2024-07-15 16:41:29.707818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.363 [2024-07-15 16:41:29.712329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.363 [2024-07-15 16:41:29.721506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.363 [2024-07-15 16:41:29.722007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.363 [2024-07-15 16:41:29.722046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.363 [2024-07-15 16:41:29.722069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.363 [2024-07-15 16:41:29.722354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.363 [2024-07-15 16:41:29.722671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.363 [2024-07-15 16:41:29.722704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.363 [2024-07-15 16:41:29.722729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.363 [2024-07-15 16:41:29.727215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.363 [2024-07-15 16:41:29.736398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.363 [2024-07-15 16:41:29.736934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.363 [2024-07-15 16:41:29.736976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.363 [2024-07-15 16:41:29.737002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.363 [2024-07-15 16:41:29.737302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.363 [2024-07-15 16:41:29.737601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.363 [2024-07-15 16:41:29.737631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.363 [2024-07-15 16:41:29.737653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.363 [2024-07-15 16:41:29.742091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.364 [2024-07-15 16:41:29.751278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.364 [2024-07-15 16:41:29.751805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.364 [2024-07-15 16:41:29.751846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.364 [2024-07-15 16:41:29.751872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.364 [2024-07-15 16:41:29.752184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.364 [2024-07-15 16:41:29.752485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.364 [2024-07-15 16:41:29.752516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.364 [2024-07-15 16:41:29.752538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.364 [2024-07-15 16:41:29.757014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.364 [2024-07-15 16:41:29.766183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.364 [2024-07-15 16:41:29.766679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.364 [2024-07-15 16:41:29.766722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.364 [2024-07-15 16:41:29.766757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.364 [2024-07-15 16:41:29.767080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.364 [2024-07-15 16:41:29.767383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.364 [2024-07-15 16:41:29.767413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.364 [2024-07-15 16:41:29.767435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.364 [2024-07-15 16:41:29.771895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.364 [2024-07-15 16:41:29.781051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.364 [2024-07-15 16:41:29.781565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.364 [2024-07-15 16:41:29.781607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.364 [2024-07-15 16:41:29.781639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.364 [2024-07-15 16:41:29.781965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.364 [2024-07-15 16:41:29.782264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.364 [2024-07-15 16:41:29.782314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.364 [2024-07-15 16:41:29.782362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.364 [2024-07-15 16:41:29.786815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.364 [2024-07-15 16:41:29.796028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.364 [2024-07-15 16:41:29.796550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.364 [2024-07-15 16:41:29.796591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.364 [2024-07-15 16:41:29.796617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.364 [2024-07-15 16:41:29.796927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.364 [2024-07-15 16:41:29.797208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.364 [2024-07-15 16:41:29.797239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.364 [2024-07-15 16:41:29.797259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.364 [2024-07-15 16:41:29.801731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.364 [2024-07-15 16:41:29.810899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.364 [2024-07-15 16:41:29.811421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.364 [2024-07-15 16:41:29.811462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.364 [2024-07-15 16:41:29.811488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.364 [2024-07-15 16:41:29.811772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.364 [2024-07-15 16:41:29.812068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.364 [2024-07-15 16:41:29.812099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.364 [2024-07-15 16:41:29.812121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.364 [2024-07-15 16:41:29.816594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.364 [2024-07-15 16:41:29.825780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.364 [2024-07-15 16:41:29.826326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.364 [2024-07-15 16:41:29.826368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.364 [2024-07-15 16:41:29.826395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.364 [2024-07-15 16:41:29.826701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.364 [2024-07-15 16:41:29.827020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.364 [2024-07-15 16:41:29.827052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.364 [2024-07-15 16:41:29.827083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.364 [2024-07-15 16:41:29.831485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.364 [2024-07-15 16:41:29.840706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.364 [2024-07-15 16:41:29.841258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.364 [2024-07-15 16:41:29.841304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.364 [2024-07-15 16:41:29.841332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.364 [2024-07-15 16:41:29.841634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.364 [2024-07-15 16:41:29.841950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.364 [2024-07-15 16:41:29.841981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.364 [2024-07-15 16:41:29.842005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.364 [2024-07-15 16:41:29.846409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.364 [2024-07-15 16:41:29.855650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.364 [2024-07-15 16:41:29.856168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.364 [2024-07-15 16:41:29.856208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.364 [2024-07-15 16:41:29.856233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.364 [2024-07-15 16:41:29.856523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.364 [2024-07-15 16:41:29.856840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.364 [2024-07-15 16:41:29.856871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.364 [2024-07-15 16:41:29.856912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.364 [2024-07-15 16:41:29.861369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.364 [2024-07-15 16:41:29.870534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.364 [2024-07-15 16:41:29.871025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.364 [2024-07-15 16:41:29.871065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.364 [2024-07-15 16:41:29.871090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.364 [2024-07-15 16:41:29.871380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.364 [2024-07-15 16:41:29.871709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.364 [2024-07-15 16:41:29.871740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.364 [2024-07-15 16:41:29.871762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.364 [2024-07-15 16:41:29.876250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.364 [2024-07-15 16:41:29.885450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.364 [2024-07-15 16:41:29.885953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.364 [2024-07-15 16:41:29.885995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.364 [2024-07-15 16:41:29.886021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.364 [2024-07-15 16:41:29.886324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.364 [2024-07-15 16:41:29.886623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.364 [2024-07-15 16:41:29.886654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.364 [2024-07-15 16:41:29.886677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.364 [2024-07-15 16:41:29.891108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.364 [2024-07-15 16:41:29.900285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.364 [2024-07-15 16:41:29.900786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.364 [2024-07-15 16:41:29.900827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.364 [2024-07-15 16:41:29.900854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.364 [2024-07-15 16:41:29.901165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.364 [2024-07-15 16:41:29.901466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.364 [2024-07-15 16:41:29.901496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.365 [2024-07-15 16:41:29.901518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.365 [2024-07-15 16:41:29.905963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.365 [2024-07-15 16:41:29.915186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.365 [2024-07-15 16:41:29.915687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.365 [2024-07-15 16:41:29.915734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.365 [2024-07-15 16:41:29.915784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.365 [2024-07-15 16:41:29.916109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.365 [2024-07-15 16:41:29.916414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.365 [2024-07-15 16:41:29.916445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.365 [2024-07-15 16:41:29.916467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.365 [2024-07-15 16:41:29.920931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.365 [2024-07-15 16:41:29.930090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.365 [2024-07-15 16:41:29.930589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.365 [2024-07-15 16:41:29.930637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.365 [2024-07-15 16:41:29.930665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.365 [2024-07-15 16:41:29.930985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.365 [2024-07-15 16:41:29.931292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.365 [2024-07-15 16:41:29.931324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.365 [2024-07-15 16:41:29.931353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.365 [2024-07-15 16:41:29.935810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.365 [2024-07-15 16:41:29.945043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.365 [2024-07-15 16:41:29.945574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.365 [2024-07-15 16:41:29.945615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.365 [2024-07-15 16:41:29.945641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.365 [2024-07-15 16:41:29.945954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.365 [2024-07-15 16:41:29.946249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.365 [2024-07-15 16:41:29.946280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.365 [2024-07-15 16:41:29.946301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.365 [2024-07-15 16:41:29.950740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:29.959972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:29.960496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:29.960536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:29.960562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:29.960857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:29.961154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:29.961185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:29.961205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.625 [2024-07-15 16:41:29.965682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:29.974897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:29.975427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:29.975469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:29.975495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:29.975794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:29.976117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:29.976149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:29.976172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.625 [2024-07-15 16:41:29.980591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:29.989775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:29.990324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:29.990365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:29.990392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:29.990696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:29.991015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:29.991047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:29.991070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.625 [2024-07-15 16:41:29.995470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:30.004725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:30.005277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:30.005320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:30.005346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:30.005621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:30.005938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:30.005971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:30.005996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.625 [2024-07-15 16:41:30.010451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:30.019482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:30.019961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:30.019999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:30.020023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:30.020294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:30.020532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:30.020557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:30.020575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.625 [2024-07-15 16:41:30.024318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:30.033395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:30.033998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:30.034039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:30.034076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:30.034365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:30.034594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:30.034619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:30.034637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.625 [2024-07-15 16:41:30.038302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:30.047627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:30.048104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:30.048155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:30.048179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:30.048468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:30.048725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:30.048749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:30.048768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.625 [2024-07-15 16:41:30.052317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.625 [2024-07-15 16:41:30.061587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.625 [2024-07-15 16:41:30.062093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.625 [2024-07-15 16:41:30.062142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.625 [2024-07-15 16:41:30.062170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.625 [2024-07-15 16:41:30.062478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.625 [2024-07-15 16:41:30.062720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.625 [2024-07-15 16:41:30.062744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.625 [2024-07-15 16:41:30.062763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.626 [2024-07-15 16:41:30.066514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.626 [2024-07-15 16:41:30.075589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.626 [2024-07-15 16:41:30.076053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.626 [2024-07-15 16:41:30.076102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.626 [2024-07-15 16:41:30.076147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.626 [2024-07-15 16:41:30.076436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.626 [2024-07-15 16:41:30.076678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.626 [2024-07-15 16:41:30.076707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.626 [2024-07-15 16:41:30.076726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.626 [2024-07-15 16:41:30.080394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.626 [2024-07-15 16:41:30.089418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.626 [2024-07-15 16:41:30.089949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.626 [2024-07-15 16:41:30.089988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.626 [2024-07-15 16:41:30.090012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.626 [2024-07-15 16:41:30.090305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.626 [2024-07-15 16:41:30.090551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.626 [2024-07-15 16:41:30.090576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.626 [2024-07-15 16:41:30.090594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.626 [2024-07-15 16:41:30.094166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.626 [2024-07-15 16:41:30.103394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.626 [2024-07-15 16:41:30.103882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.626 [2024-07-15 16:41:30.103932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.626 [2024-07-15 16:41:30.103954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.626 [2024-07-15 16:41:30.104226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.626 [2024-07-15 16:41:30.104485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.626 [2024-07-15 16:41:30.104508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.626 [2024-07-15 16:41:30.104526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.626 [2024-07-15 16:41:30.108133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.626 [2024-07-15 16:41:30.117387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.626 [2024-07-15 16:41:30.117898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.626 [2024-07-15 16:41:30.117956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.626 [2024-07-15 16:41:30.117989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.626 [2024-07-15 16:41:30.118292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.626 [2024-07-15 16:41:30.118533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.626 [2024-07-15 16:41:30.118556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.626 [2024-07-15 16:41:30.118573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.626 [2024-07-15 16:41:30.122139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.626 [2024-07-15 16:41:30.131388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.626 [2024-07-15 16:41:30.131931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.626 [2024-07-15 16:41:30.131982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.626 [2024-07-15 16:41:30.132006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.626 [2024-07-15 16:41:30.132288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.626 [2024-07-15 16:41:30.132507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.626 [2024-07-15 16:41:30.132530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.626 [2024-07-15 16:41:30.132547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.626 [2024-07-15 16:41:30.136139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.626 [2024-07-15 16:41:30.145413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.626 [2024-07-15 16:41:30.145980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.626 [2024-07-15 16:41:30.146029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:50.626 [2024-07-15 16:41:30.146051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:50.626 [2024-07-15 16:41:30.146349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:50.626 [2024-07-15 16:41:30.146617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.626 [2024-07-15 16:41:30.146646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.626 [2024-07-15 16:41:30.146668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.626 [2024-07-15 16:41:30.150269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.626 [2024-07-15 16:41:30.159269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.626 [2024-07-15 16:41:30.159728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.626 [2024-07-15 16:41:30.159762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.626 [2024-07-15 16:41:30.159799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.626 [2024-07-15 16:41:30.160127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.626 [2024-07-15 16:41:30.160381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.626 [2024-07-15 16:41:30.160405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.626 [2024-07-15 16:41:30.160423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.626 [2024-07-15 16:41:30.163998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.626 [2024-07-15 16:41:30.173213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.626 [2024-07-15 16:41:30.173829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.626 [2024-07-15 16:41:30.173888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.626 [2024-07-15 16:41:30.173912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.626 [2024-07-15 16:41:30.174211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.626 [2024-07-15 16:41:30.174443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.626 [2024-07-15 16:41:30.174468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.626 [2024-07-15 16:41:30.174484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.626 [2024-07-15 16:41:30.178063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.626 [2024-07-15 16:41:30.187087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.626 [2024-07-15 16:41:30.187723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.626 [2024-07-15 16:41:30.187774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.626 [2024-07-15 16:41:30.187798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.626 [2024-07-15 16:41:30.188123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.626 [2024-07-15 16:41:30.188378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.626 [2024-07-15 16:41:30.188402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.626 [2024-07-15 16:41:30.188420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.626 [2024-07-15 16:41:30.192040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.626 [2024-07-15 16:41:30.200885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.626 [2024-07-15 16:41:30.201320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.626 [2024-07-15 16:41:30.201355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.626 [2024-07-15 16:41:30.201378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.626 [2024-07-15 16:41:30.201649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.626 [2024-07-15 16:41:30.201916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.626 [2024-07-15 16:41:30.201942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.626 [2024-07-15 16:41:30.201961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.626 [2024-07-15 16:41:30.205485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.626 [2024-07-15 16:41:30.214811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.626 [2024-07-15 16:41:30.215323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.626 [2024-07-15 16:41:30.215376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.626 [2024-07-15 16:41:30.215404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.627 [2024-07-15 16:41:30.215690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.627 [2024-07-15 16:41:30.215992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.627 [2024-07-15 16:41:30.216038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.627 [2024-07-15 16:41:30.216067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.627 [2024-07-15 16:41:30.220008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.887 [2024-07-15 16:41:30.229001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.887 [2024-07-15 16:41:30.229575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.887 [2024-07-15 16:41:30.229611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.887 [2024-07-15 16:41:30.229633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.887 [2024-07-15 16:41:30.229931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.887 [2024-07-15 16:41:30.230215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.230241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.230259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.234082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.242842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.243454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.243510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.243537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.243839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.244127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.244154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.244188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.247757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.256897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.257368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.257417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.257441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.257724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.257979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.258005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.258023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.261631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.271101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.271625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.271674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.271698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.271983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.272263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.272303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.272328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.276069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.285240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.285777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.285827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.285852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.286153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.286421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.286446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.286463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.290051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.299189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.299690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.299726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.299750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.300061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.300339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.300376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.300393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.304012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.313164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.313669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.313718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.313740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.314030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.314303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.314327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.314346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.317924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.327053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.327605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.327656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.327680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.327995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.328254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.328279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.328298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.331846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.340989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.341494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.341544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.341576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.341902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.342148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.342173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.342192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.345783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.354972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.355438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.355473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.355495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.355768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.356042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.356069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.356088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.359722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.368892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.369351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.369399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.888 [2024-07-15 16:41:30.369420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.888 [2024-07-15 16:41:30.369679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.888 [2024-07-15 16:41:30.369968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.888 [2024-07-15 16:41:30.369994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.888 [2024-07-15 16:41:30.370013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.888 [2024-07-15 16:41:30.373626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.888 [2024-07-15 16:41:30.382739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.888 [2024-07-15 16:41:30.383257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.888 [2024-07-15 16:41:30.383307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.383329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.383614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.383897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.383950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.383971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.889 [2024-07-15 16:41:30.387657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.889 [2024-07-15 16:41:30.396772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.889 [2024-07-15 16:41:30.397407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.889 [2024-07-15 16:41:30.397458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.397483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.397774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.398051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.398082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.398105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.889 [2024-07-15 16:41:30.401684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.889 [2024-07-15 16:41:30.410751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.889 [2024-07-15 16:41:30.411256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.889 [2024-07-15 16:41:30.411290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.411334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.411622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.411841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.411865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.411909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.889 [2024-07-15 16:41:30.415521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.889 [2024-07-15 16:41:30.424620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.889 [2024-07-15 16:41:30.425222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.889 [2024-07-15 16:41:30.425272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.425295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.425568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.425812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.425837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.425872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.889 [2024-07-15 16:41:30.429510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.889 [2024-07-15 16:41:30.438554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.889 [2024-07-15 16:41:30.439062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.889 [2024-07-15 16:41:30.439114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.439139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.439427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.439665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.439690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.439708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.889 [2024-07-15 16:41:30.443342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.889 [2024-07-15 16:41:30.452493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.889 [2024-07-15 16:41:30.453103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.889 [2024-07-15 16:41:30.453155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.453179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.453468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.453691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.453720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.453738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.889 [2024-07-15 16:41:30.457428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.889 [2024-07-15 16:41:30.466288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.889 [2024-07-15 16:41:30.466767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.889 [2024-07-15 16:41:30.466803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.466826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.467137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.467412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.467437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.467456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.889 [2024-07-15 16:41:30.471005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.889 [2024-07-15 16:41:30.480503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.889 [2024-07-15 16:41:30.481011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.889 [2024-07-15 16:41:30.481051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:50.889 [2024-07-15 16:41:30.481076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:50.889 [2024-07-15 16:41:30.481383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:50.889 [2024-07-15 16:41:30.481626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.889 [2024-07-15 16:41:30.481650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.889 [2024-07-15 16:41:30.481668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.150 [2024-07-15 16:41:30.485562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.150 [2024-07-15 16:41:30.494455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.150 [2024-07-15 16:41:30.494996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.150 [2024-07-15 16:41:30.495034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:51.150 [2024-07-15 16:41:30.495058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:51.150 [2024-07-15 16:41:30.495341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:51.150 [2024-07-15 16:41:30.495570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.150 [2024-07-15 16:41:30.495593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.150 [2024-07-15 16:41:30.495610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.150 [2024-07-15 16:41:30.499212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.150 [2024-07-15 16:41:30.508408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.150 [2024-07-15 16:41:30.509033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.150 [2024-07-15 16:41:30.509067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:51.150 [2024-07-15 16:41:30.509088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:51.150 [2024-07-15 16:41:30.509355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:51.150 [2024-07-15 16:41:30.509624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.150 [2024-07-15 16:41:30.509653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.150 [2024-07-15 16:41:30.509677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.150 [2024-07-15 16:41:30.513295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.150 [2024-07-15 16:41:30.522576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.150 [2024-07-15 16:41:30.523032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.150 [2024-07-15 16:41:30.523069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:51.150 [2024-07-15 16:41:30.523094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:51.150 [2024-07-15 16:41:30.523380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:51.150 [2024-07-15 16:41:30.523621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.150 [2024-07-15 16:41:30.523645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.150 [2024-07-15 16:41:30.523663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.150 [2024-07-15 16:41:30.527373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.150 [2024-07-15 16:41:30.536549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.150 [2024-07-15 16:41:30.536998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.150 [2024-07-15 16:41:30.537035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:51.150 [2024-07-15 16:41:30.537059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:51.150 [2024-07-15 16:41:30.537340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:51.150 [2024-07-15 16:41:30.537586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.150 [2024-07-15 16:41:30.537610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.150 [2024-07-15 16:41:30.537629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.150 [2024-07-15 16:41:30.541259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.150 [2024-07-15 16:41:30.550590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.150 [2024-07-15 16:41:30.551063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.150 [2024-07-15 16:41:30.551113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420
00:24:51.150 [2024-07-15 16:41:30.551136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set
00:24:51.150 [2024-07-15 16:41:30.551438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor
00:24:51.150 [2024-07-15 16:41:30.551687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.150 [2024-07-15 16:41:30.551711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.150 [2024-07-15 16:41:30.551729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.150 [2024-07-15 16:41:30.555354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.150 [2024-07-15 16:41:30.564470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.150 [2024-07-15 16:41:30.564988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.150 [2024-07-15 16:41:30.565055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.150 [2024-07-15 16:41:30.565080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.150 [2024-07-15 16:41:30.565375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.565614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.565638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.565655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.569280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.578356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.578961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.579010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.579033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.579309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.579534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.579558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.579594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.583280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.592358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.592798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.592844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.592866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.593163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.593427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.593462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.593490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.597101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.606246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.606744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.606792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.606815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.607123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.607386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.607414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.607436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.611014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.620120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.620644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.620694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.620717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.621009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.621276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.621301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.621319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.624917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.634103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.634601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.634657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.634693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.635021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.635301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.635325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.635344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.638924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.648085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.648642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.648690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.648713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.649006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.649288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.649312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.649335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.652997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.662051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.662527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.662562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.662584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.662885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.663140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.663165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.663199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.666808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.675916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.676418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.676468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.676492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.676770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.677056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.677081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.677100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.680772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.689630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.690120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.690149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.690165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.690397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.690597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.690616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.690628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.693606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.702897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.703387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.703413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.703442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.703688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.151 [2024-07-15 16:41:30.703903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.151 [2024-07-15 16:41:30.703937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.151 [2024-07-15 16:41:30.703950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.151 [2024-07-15 16:41:30.706949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.151 [2024-07-15 16:41:30.716047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.151 [2024-07-15 16:41:30.716605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.151 [2024-07-15 16:41:30.716646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.151 [2024-07-15 16:41:30.716663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.151 [2024-07-15 16:41:30.716902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.152 [2024-07-15 16:41:30.717099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.152 [2024-07-15 16:41:30.717118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.152 [2024-07-15 16:41:30.717130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.152 [2024-07-15 16:41:30.719919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.152 [2024-07-15 16:41:30.729074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.152 [2024-07-15 16:41:30.729501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.152 [2024-07-15 16:41:30.729542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.152 [2024-07-15 16:41:30.729558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.152 [2024-07-15 16:41:30.729802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.152 [2024-07-15 16:41:30.730022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.152 [2024-07-15 16:41:30.730042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.152 [2024-07-15 16:41:30.730054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.152 [2024-07-15 16:41:30.732842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.152 [2024-07-15 16:41:30.743014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.152 [2024-07-15 16:41:30.743460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.152 [2024-07-15 16:41:30.743491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.152 [2024-07-15 16:41:30.743508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.152 [2024-07-15 16:41:30.743744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.152 [2024-07-15 16:41:30.743997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.152 [2024-07-15 16:41:30.744021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.152 [2024-07-15 16:41:30.744036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.411 [2024-07-15 16:41:30.747596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.411 [2024-07-15 16:41:30.756853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.411 [2024-07-15 16:41:30.757303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.411 [2024-07-15 16:41:30.757333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.411 [2024-07-15 16:41:30.757351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.411 [2024-07-15 16:41:30.757587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.411 [2024-07-15 16:41:30.757826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.411 [2024-07-15 16:41:30.757849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.411 [2024-07-15 16:41:30.757864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.411 [2024-07-15 16:41:30.761432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.411 [2024-07-15 16:41:30.770684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.411 [2024-07-15 16:41:30.771147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.411 [2024-07-15 16:41:30.771179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.411 [2024-07-15 16:41:30.771196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.411 [2024-07-15 16:41:30.771433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.411 [2024-07-15 16:41:30.771673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.411 [2024-07-15 16:41:30.771695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.411 [2024-07-15 16:41:30.771709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.411 [2024-07-15 16:41:30.775275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.411 [2024-07-15 16:41:30.784525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.411 [2024-07-15 16:41:30.784964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.411 [2024-07-15 16:41:30.784995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.785019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.785257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.785498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.785520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.785535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.789105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.798357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.798809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.798839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.798857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.799103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.799345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.799367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.799382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.802946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.812182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.812619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.812649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.812666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.812913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.813154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.813177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.813192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.816748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.826204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.826641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.826671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.826687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.826937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.827178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.827206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.827222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.830777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.840030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.840462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.840492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.840508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.840745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.840999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.841023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.841038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.844595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.853840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.854261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.854292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.854308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.854545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.854785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.854808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.854822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.858387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.867844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.868264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.868295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.868312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.868548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.868788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.868811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.868825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.872392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.881847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.882301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.882331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.882349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.882585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.882825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.882847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.882862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.886430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.895890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.896305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.896335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.896353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.896589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.896829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.896852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.896866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.900433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.909890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.910300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.910331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.910348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.910584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.910824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.910846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.910861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.914428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.923894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.924307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.412 [2024-07-15 16:41:30.924337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.412 [2024-07-15 16:41:30.924355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.412 [2024-07-15 16:41:30.924598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.412 [2024-07-15 16:41:30.924839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.412 [2024-07-15 16:41:30.924863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.412 [2024-07-15 16:41:30.924891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.412 [2024-07-15 16:41:30.928454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.412 [2024-07-15 16:41:30.937916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.412 [2024-07-15 16:41:30.938328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.413 [2024-07-15 16:41:30.938358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.413 [2024-07-15 16:41:30.938375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.413 [2024-07-15 16:41:30.938611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.413 [2024-07-15 16:41:30.938851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.413 [2024-07-15 16:41:30.938874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.413 [2024-07-15 16:41:30.938902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.413 [2024-07-15 16:41:30.942465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.413 [2024-07-15 16:41:30.951927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.413 [2024-07-15 16:41:30.952363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.413 [2024-07-15 16:41:30.952393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.413 [2024-07-15 16:41:30.952410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.413 [2024-07-15 16:41:30.952647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.413 [2024-07-15 16:41:30.952898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.413 [2024-07-15 16:41:30.952921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.413 [2024-07-15 16:41:30.952936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.413 [2024-07-15 16:41:30.956494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.413 [2024-07-15 16:41:30.965738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.413 [2024-07-15 16:41:30.966182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.413 [2024-07-15 16:41:30.966212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.413 [2024-07-15 16:41:30.966229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.413 [2024-07-15 16:41:30.966465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.413 [2024-07-15 16:41:30.966705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.413 [2024-07-15 16:41:30.966728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.413 [2024-07-15 16:41:30.966748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.413 [2024-07-15 16:41:30.970316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.413 [2024-07-15 16:41:30.979559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.413 [2024-07-15 16:41:30.979984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.413 [2024-07-15 16:41:30.980016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.413 [2024-07-15 16:41:30.980033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.413 [2024-07-15 16:41:30.980270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.413 [2024-07-15 16:41:30.980511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.413 [2024-07-15 16:41:30.980534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.413 [2024-07-15 16:41:30.980548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.413 [2024-07-15 16:41:30.984117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.413 [2024-07-15 16:41:30.993567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1617225 Killed "${NVMF_APP[@]}" "$@" 00:24:51.413 [2024-07-15 16:41:30.994008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.413 [2024-07-15 16:41:30.994039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.413 [2024-07-15 16:41:30.994056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:51.413 [2024-07-15 16:41:30.994293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.413 [2024-07-15 16:41:30.994533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.413 [2024-07-15 16:41:30.994557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.413 [2024-07-15 16:41:30.994572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.413 [2024-07-15 16:41:30.998140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1618309 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1618309 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1618309 ']' 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.413 16:41:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.413 [2024-07-15 16:41:31.007401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.413 [2024-07-15 16:41:31.007838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.413 [2024-07-15 16:41:31.007868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.413 [2024-07-15 16:41:31.007895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.673 [2024-07-15 16:41:31.008133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.673 [2024-07-15 16:41:31.008374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.673 [2024-07-15 16:41:31.008397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.673 [2024-07-15 16:41:31.008412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.673 [2024-07-15 16:41:31.011977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.673 [2024-07-15 16:41:31.021228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.673 [2024-07-15 16:41:31.021663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.673 [2024-07-15 16:41:31.021693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.673 [2024-07-15 16:41:31.021711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.673 [2024-07-15 16:41:31.021957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.673 [2024-07-15 16:41:31.022198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.673 [2024-07-15 16:41:31.022221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.673 [2024-07-15 16:41:31.022236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.673 [2024-07-15 16:41:31.025793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.673 [2024-07-15 16:41:31.035063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.673 [2024-07-15 16:41:31.035484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.673 [2024-07-15 16:41:31.035515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.673 [2024-07-15 16:41:31.035532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.673 [2024-07-15 16:41:31.035769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.673 [2024-07-15 16:41:31.036020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.036043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.036058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.039617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.674 [2024-07-15 16:41:31.048979] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:24:51.674 [2024-07-15 16:41:31.049050] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.674 [2024-07-15 16:41:31.049083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.049491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.049521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.049538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.049775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.050026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.050050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.050065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.053620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 [2024-07-15 16:41:31.063076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.063523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.063553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.063571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.063807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.064057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.064080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.064095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.067650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 [2024-07-15 16:41:31.076299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.076722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.076749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.076765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.077016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.077234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.077253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.077265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.080325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.674 [2024-07-15 16:41:31.089601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.090007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.090040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.090057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.090286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.090501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.090520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.090532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.093609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 [2024-07-15 16:41:31.102783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.103220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.103248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.103264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.103496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.103693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.103712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.103723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.106697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 [2024-07-15 16:41:31.116035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.116453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.116479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.116494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.116494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:51.674 [2024-07-15 16:41:31.116761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.116988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.117008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.117021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.119996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 [2024-07-15 16:41:31.129279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.129860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.129918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.129938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.130181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.130410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.130429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.130444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.133414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 [2024-07-15 16:41:31.142652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.143102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.143130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.143161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.143413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.143611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.143629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.143641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.674 [2024-07-15 16:41:31.146612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.674 [2024-07-15 16:41:31.156650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.674 [2024-07-15 16:41:31.157061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.674 [2024-07-15 16:41:31.157092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.674 [2024-07-15 16:41:31.157109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.674 [2024-07-15 16:41:31.157346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.674 [2024-07-15 16:41:31.157587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.674 [2024-07-15 16:41:31.157611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.674 [2024-07-15 16:41:31.157626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.161191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.675 [2024-07-15 16:41:31.170637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.171099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.171130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.171147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.171383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.171624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.171647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.171662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.175238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.675 [2024-07-15 16:41:31.184507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.185132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.185172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.185194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.185442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.185685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.185708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.185726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.189291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.675 [2024-07-15 16:41:31.198532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.198959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.198990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.199008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.199244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.199485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.199507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.199522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.203087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.675 [2024-07-15 16:41:31.212526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.212979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.213011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.213030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.213268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.213509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.213531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.213546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.217114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.675 [2024-07-15 16:41:31.226350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.226777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.226808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.226839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.227086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.227328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.227352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.227366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.230925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.675 [2024-07-15 16:41:31.233359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.675 [2024-07-15 16:41:31.233396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.675 [2024-07-15 16:41:31.233413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.675 [2024-07-15 16:41:31.233426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:51.675 [2024-07-15 16:41:31.233438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.675 [2024-07-15 16:41:31.233528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.675 [2024-07-15 16:41:31.233584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.675 [2024-07-15 16:41:31.233587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.675 [2024-07-15 16:41:31.240385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.240923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.240960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.240981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.241225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.241469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.241492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.241508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.245088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.675 [2024-07-15 16:41:31.254355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.254952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.254993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.255014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.255260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.255504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.255527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.255545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.675 [2024-07-15 16:41:31.259125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.675 [2024-07-15 16:41:31.268399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.675 [2024-07-15 16:41:31.268930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.675 [2024-07-15 16:41:31.268973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.675 [2024-07-15 16:41:31.268994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.675 [2024-07-15 16:41:31.269239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.675 [2024-07-15 16:41:31.269483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.675 [2024-07-15 16:41:31.269506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.675 [2024-07-15 16:41:31.269524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.936 [2024-07-15 16:41:31.273095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.936 [2024-07-15 16:41:31.282370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.936 [2024-07-15 16:41:31.282907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.936 [2024-07-15 16:41:31.282950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.282972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.283218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.283463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.283486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.283504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.287074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.296325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.296875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.296919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.296938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.297182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.297425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.297449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.297465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.301033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.310301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.310957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.311003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.311041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.311290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.311534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.311557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.311575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.315142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.324183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.324629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.324660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.324678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.324926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.325167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.325190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.325205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.328759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.338007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.338406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.338436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.338453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.338690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.338941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.338964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.338980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.342481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.351454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.351887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.351915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.351940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.352153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.352387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.352421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.352434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.355588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.937 [2024-07-15 16:41:31.364929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.365918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.365965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.365984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.366217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.366428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.366450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.366465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.369635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.378380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.378799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.378828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.378845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.379069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.379287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.379307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.379320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.937 [2024-07-15 16:41:31.382655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.386150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:51.937 [2024-07-15 16:41:31.391847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.937 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.937 [2024-07-15 16:41:31.392280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.937 [2024-07-15 16:41:31.392308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.937 [2024-07-15 16:41:31.392324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.937 [2024-07-15 16:41:31.392537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.937 [2024-07-15 16:41:31.392753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.937 [2024-07-15 16:41:31.392774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.937 [2024-07-15 16:41:31.392787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.937 [2024-07-15 16:41:31.396045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.937 [2024-07-15 16:41:31.405379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.937 [2024-07-15 16:41:31.405849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.938 [2024-07-15 16:41:31.405882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.938 [2024-07-15 16:41:31.405899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.938 [2024-07-15 16:41:31.406112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.938 [2024-07-15 16:41:31.406359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.938 [2024-07-15 16:41:31.406379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.938 [2024-07-15 16:41:31.406391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.938 [2024-07-15 16:41:31.409595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.938 [2024-07-15 16:41:31.418917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.938 [2024-07-15 16:41:31.419368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.938 [2024-07-15 16:41:31.419396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.938 [2024-07-15 16:41:31.419411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.938 [2024-07-15 16:41:31.419625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.938 [2024-07-15 16:41:31.419873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.938 [2024-07-15 16:41:31.419902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.938 [2024-07-15 16:41:31.419916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.938 [2024-07-15 16:41:31.423094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.938 [2024-07-15 16:41:31.432390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.938 [2024-07-15 16:41:31.432995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.938 [2024-07-15 16:41:31.433032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.938 [2024-07-15 16:41:31.433062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.938 [2024-07-15 16:41:31.433304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.938 [2024-07-15 16:41:31.433528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.938 [2024-07-15 16:41:31.433548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.938 [2024-07-15 16:41:31.433563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.938 Malloc0 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.938 [2024-07-15 16:41:31.436776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.938 [2024-07-15 16:41:31.446072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.938 [2024-07-15 16:41:31.446486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.938 [2024-07-15 16:41:31.446514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d5ac0 with addr=10.0.0.2, port=4420 00:24:51.938 [2024-07-15 16:41:31.446530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d5ac0 is same with the state(5) to be set 00:24:51.938 [2024-07-15 16:41:31.446757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d5ac0 (9): Bad file descriptor 00:24:51.938 [2024-07-15 16:41:31.447000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.938 [2024-07-15 16:41:31.447022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.938 [2024-07-15 16:41:31.447035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.938 [2024-07-15 16:41:31.450311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.938 [2024-07-15 16:41:31.455840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.938 [2024-07-15 16:41:31.459630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.938 16:41:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1617641 00:24:52.204 [2024-07-15 16:41:31.533231] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:02.183 00:25:02.183 Latency(us) 00:25:02.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:02.183 Verification LBA range: start 0x0 length 0x4000 00:25:02.183 Nvme1n1 : 15.00 6714.43 26.23 8363.49 0.00 8464.25 1146.88 18932.62 00:25:02.183 =================================================================================================================== 00:25:02.183 Total : 6714.43 26.23 8363.49 0.00 8464.25 1146.88 18932.62 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:02.183 rmmod nvme_tcp 00:25:02.183 rmmod nvme_fabrics 00:25:02.183 rmmod nvme_keyring 00:25:02.183 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1618309 ']' 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1618309 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1618309 ']' 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1618309 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618309 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618309' 00:25:02.184 killing process with pid 1618309 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1618309 00:25:02.184 16:41:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1618309 00:25:02.184 16:41:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.184 16:41:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.184 16:41:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.184 16:41:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.184 16:41:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.184 16:41:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.184 16:41:41 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.184 16:41:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.561 16:41:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.561 00:25:03.561 real 0m23.175s 00:25:03.561 user 1m2.833s 00:25:03.561 sys 0m4.148s 00:25:03.562 16:41:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:03.562 16:41:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.562 ************************************ 00:25:03.562 END TEST nvmf_bdevperf 00:25:03.562 ************************************ 00:25:03.819 16:41:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:03.820 16:41:43 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:03.820 16:41:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:03.820 16:41:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.820 16:41:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.820 ************************************ 00:25:03.820 START TEST nvmf_target_disconnect 00:25:03.820 ************************************ 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:03.820 * Looking for test storage... 
00:25:03.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.820 16:41:43 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.820 16:41:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:05.722 16:41:45 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.722 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.722 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.722 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.722 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.722 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.723 16:41:45 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.723 16:41:45 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.723 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:25:05.981 00:25:05.981 --- 10.0.0.2 ping statistics --- 00:25:05.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.981 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:05.981 00:25:05.981 --- 10.0.0.1 ping statistics --- 00:25:05.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.981 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:05.981 16:41:45 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:05.981 16:41:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:05.982 ************************************ 00:25:05.982 START TEST nvmf_target_disconnect_tc1 00:25:05.982 ************************************ 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.982 16:41:45 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.982 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.982 [2024-07-15 16:41:45.468583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.982 [2024-07-15 16:41:45.468651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x225f1a0 with addr=10.0.0.2, port=4420 00:25:05.982 [2024-07-15 16:41:45.468684] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:05.982 [2024-07-15 16:41:45.468708] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:05.982 [2024-07-15 16:41:45.468722] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:05.982 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:05.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:05.982 Initializing NVMe Controllers 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.982 00:25:05.982 real 0m0.089s 00:25:05.982 user 0m0.044s 00:25:05.982 sys 0m0.045s 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.982 ************************************ 00:25:05.982 END TEST nvmf_target_disconnect_tc1 00:25:05.982 ************************************ 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:05.982 ************************************ 00:25:05.982 START TEST nvmf_target_disconnect_tc2 00:25:05.982 
************************************ 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1621458 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1621458 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1621458 ']' 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:05.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.982 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.982 [2024-07-15 16:41:45.574305] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:25:05.982 [2024-07-15 16:41:45.574390] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.240 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.240 [2024-07-15 16:41:45.644069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.240 [2024-07-15 16:41:45.751931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.240 [2024-07-15 16:41:45.751987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.240 [2024-07-15 16:41:45.752015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.240 [2024-07-15 16:41:45.752027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.240 [2024-07-15 16:41:45.752038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:06.240 [2024-07-15 16:41:45.752124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:06.240 [2024-07-15 16:41:45.752176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:06.240 [2024-07-15 16:41:45.752224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:06.240 [2024-07-15 16:41:45.752226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.499 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.500 Malloc0 00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:06.500 [2024-07-15 16:41:45.916461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:06.500 [2024-07-15 16:41:45.944704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1621490
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:06.500 16:41:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:25:06.500 EAL: No free 2048 kB hugepages reported on node 1
00:25:08.403 16:41:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1621458
00:25:08.403 16:41:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:25:08.403 Read completed with error (sct=0, sc=8)
00:25:08.403 starting I/O failed
00:25:08.403 Read completed with error
(sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, 
sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 [2024-07-15 16:41:47.968584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 
starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Read completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.403 Write completed with error (sct=0, sc=8) 00:25:08.403 starting I/O failed 00:25:08.404 [2024-07-15 16:41:47.968939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O 
failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 
00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 [2024-07-15 16:41:47.969252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write 
completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Read completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 Write completed with error (sct=0, sc=8) 00:25:08.404 starting I/O failed 00:25:08.404 [2024-07-15 16:41:47.969571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.404 [2024-07-15 16:41:47.969799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.969843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 
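The Read/Write failure storm above is easier to audit as per-opcode counts. A minimal post-processing sketch, assuming the console output has been saved to a file; the path and the sample lines below are invented for illustration, following the format of this run:

```shell
# Hypothetical helper: tally failed completions per opcode (Read vs Write).
# Sample input reconstructed from this run's log format.
cat > /tmp/io_failures.log <<'EOF'
Read completed with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8)
Read completed with error (sct=0, sc=8)
EOF
# Field 1 of each matching line is the opcode; count occurrences.
awk '/completed with error/ { n[$1]++ } END { for (op in n) print op, n[op] }' /tmp/io_failures.log
```

The `for (op in n)` iteration order is unspecified in awk, so pipe through `sort` if a stable ordering matters.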
00:25:08.404 [2024-07-15 16:41:47.970037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.970072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.970254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.970280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.970420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.970445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.970662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.970689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.970849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.970880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 
00:25:08.404 [2024-07-15 16:41:47.971033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.971059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.971191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.971216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.971383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.971408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.971592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.971620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.971793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.971821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 
00:25:08.404 [2024-07-15 16:41:47.972006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.972031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.972189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.972214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.972378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.972403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.972562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.972586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.404 [2024-07-15 16:41:47.972763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.972807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 
00:25:08.404 [2024-07-15 16:41:47.972989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.404 [2024-07-15 16:41:47.973014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.404 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.973153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.973177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.973329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.973354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.973516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.973555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.973727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.973753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 
00:25:08.405 [2024-07-15 16:41:47.973937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.973962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.974094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.974119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.974266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.974292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.974430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.974455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.974595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.974620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 
00:25:08.405 [2024-07-15 16:41:47.974757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.974781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.974954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.974994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.975134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.975175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.975376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.975402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.975562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.975588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 
00:25:08.405 [2024-07-15 16:41:47.975777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.975821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.975965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.975992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.976134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.976160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.976398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.976424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.976588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.976614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 
00:25:08.405 [2024-07-15 16:41:47.976775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.976802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.976950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.976975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.977145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.977170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.977300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.977325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 00:25:08.405 [2024-07-15 16:41:47.977509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.405 [2024-07-15 16:41:47.977534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.405 qpair failed and we were unable to recover it. 
00:25:08.405 [2024-07-15 16:41:47.977693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.977717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.977849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.977874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.978015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.978040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.978175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.978200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.978339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.978364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.978495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.978520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.978657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.978681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.978908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.978933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.979106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.979131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.979301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.979326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.979550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.979642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.979806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.979831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.405 [2024-07-15 16:41:47.979981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.405 [2024-07-15 16:41:47.980006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.405 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.980146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.980172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.980445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.980516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.980724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.980776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.980960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.980987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.981129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.981155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.981351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.981378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.981592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.981622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.981824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.981850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.982005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.982031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.982164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.982188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.982353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.982377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.982531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.982556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.982696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.982721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.982887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.982912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.983049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.983074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.983250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.983275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.983418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.983443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.983602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.983628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.983785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.983826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.984961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.985001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.985181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.985210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.985398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.985424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.985587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.985612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.985756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.985780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.985952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.985979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.986134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.986159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.986320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.986345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.986484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.986526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.986701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.986734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.986942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.986967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.987126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.987151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.987333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.987358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.987520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.987546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.987710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.987735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.987868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.987898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.988039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.988063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.988220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.988245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.988382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.988407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.406 [2024-07-15 16:41:47.988608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.406 [2024-07-15 16:41:47.988633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.406 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.988795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.988821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.988958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.988983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.989122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.989149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.989313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.989338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.989500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.989524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.989661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.989686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.989825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.989850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.990017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.990042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.990217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.990242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.990404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.990429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.990590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.990614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.990755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.990779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.990966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.990992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.991124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.991165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.991358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.991382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.991538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.991581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.991738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.991763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.991946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.991972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.992105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.992129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.992269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.992295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.992453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.992495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.992678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.992703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.992854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.992894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.993052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.993077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.993211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.993236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.993393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.993418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.993580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.993620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.993828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.993853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.994019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.994047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.994243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.994268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.994456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.994481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.994619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.994644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.994781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.994806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.995013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.995049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.995210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.995236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.995405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.995446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.995639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.407 [2024-07-15 16:41:47.995664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.407 qpair failed and we were unable to recover it.
00:25:08.407 [2024-07-15 16:41:47.995850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.995885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.996056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.996083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.996246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.996271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.996422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.996447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.996576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.996602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.996761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.996786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.996991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.997017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.997187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.997212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.997384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.997409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.997566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.997591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.997725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.997750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.997918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.997943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.998098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.998126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.408 [2024-07-15 16:41:47.998296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.408 [2024-07-15 16:41:47.998324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.408 qpair failed and we were unable to recover it.
00:25:08.687 [2024-07-15 16:41:47.998521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.687 [2024-07-15 16:41:47.998547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.687 qpair failed and we were unable to recover it.
00:25:08.687 [2024-07-15 16:41:47.998725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.687 [2024-07-15 16:41:47.998753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.687 qpair failed and we were unable to recover it.
00:25:08.687 [2024-07-15 16:41:47.998965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.687 [2024-07-15 16:41:47.998991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.687 qpair failed and we were unable to recover it.
00:25:08.687 [2024-07-15 16:41:47.999155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.687 [2024-07-15 16:41:47.999187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.687 qpair failed and we were unable to recover it.
00:25:08.687 [2024-07-15 16:41:47.999384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.687 [2024-07-15 16:41:47.999409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.687 qpair failed and we were unable to recover it.
00:25:08.687 [2024-07-15 16:41:47.999571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.687 [2024-07-15 16:41:47.999596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.687 qpair failed and we were unable to recover it.
00:25:08.687 [2024-07-15 16:41:47.999728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.688 [2024-07-15 16:41:47.999758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.688 qpair failed and we were unable to recover it.
00:25:08.688 [2024-07-15 16:41:47.999919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.688 [2024-07-15 16:41:47.999945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.688 qpair failed and we were unable to recover it.
00:25:08.688 [2024-07-15 16:41:48.000082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.688 [2024-07-15 16:41:48.000108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.688 qpair failed and we were unable to recover it.
00:25:08.688 [2024-07-15 16:41:48.000268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.688 [2024-07-15 16:41:48.000293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.688 qpair failed and we were unable to recover it.
00:25:08.688 [2024-07-15 16:41:48.000450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.688 [2024-07-15 16:41:48.000475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.688 qpair failed and we were unable to recover it.
00:25:08.688 [2024-07-15 16:41:48.000654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.688 [2024-07-15 16:41:48.000680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.688 qpair failed and we were unable to recover it.
00:25:08.688 [2024-07-15 16:41:48.000808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.000833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.000970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.000997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.001146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.001189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.001371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.001396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.001527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.001552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 
00:25:08.688 [2024-07-15 16:41:48.001679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.001704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.001867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.001898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.002059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.002084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.002246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.002272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.002433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.002458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 
00:25:08.688 [2024-07-15 16:41:48.002606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.002647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.002790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.002818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.003007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.003033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.003223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.003249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.003375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.003401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 
00:25:08.688 [2024-07-15 16:41:48.003566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.003592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.003765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.003800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.003958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.003984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.004112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.004138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.004287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.004312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 
00:25:08.688 [2024-07-15 16:41:48.004563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.004591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.004778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.004808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.004968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.004994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.005181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.005206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.005346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.005371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 
00:25:08.688 [2024-07-15 16:41:48.005623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.005651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.005870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.005902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.006042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.006068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.006201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.006227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.006389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.006414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 
00:25:08.688 [2024-07-15 16:41:48.006577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.006602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.006739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.006766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.007006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.007033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.007219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.007244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 00:25:08.688 [2024-07-15 16:41:48.007410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.688 [2024-07-15 16:41:48.007436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.688 qpair failed and we were unable to recover it. 
00:25:08.688 [2024-07-15 16:41:48.007569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.007595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.007759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.007785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.007920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.007946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.008108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.008151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.008371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.008396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.008535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.008560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.008724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.008750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.008911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.008937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.009090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.009115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.009298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.009323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.009488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.009513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.009644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.009669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.009861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.009891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.010031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.010061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.010226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.010251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.010408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.010449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.010658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.010684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.010838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.010863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.011015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.011040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.011196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.011221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.011357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.011382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.011520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.011545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.011735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.011760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.011919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.011945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.012078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.012103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.012232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.012257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.012389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.012414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.012636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.012662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.012855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.012886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.013131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.013156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.013301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.013329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.013482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.013507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.013661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.013686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.013840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.013865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.014044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.014069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.014244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.014272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.014447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.014472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.014663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.014688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.014848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.014873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.015074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.015099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 
00:25:08.689 [2024-07-15 16:41:48.015270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.689 [2024-07-15 16:41:48.015297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.689 qpair failed and we were unable to recover it. 00:25:08.689 [2024-07-15 16:41:48.015461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.690 [2024-07-15 16:41:48.015487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.690 qpair failed and we were unable to recover it. 00:25:08.690 [2024-07-15 16:41:48.015614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.690 [2024-07-15 16:41:48.015640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.690 qpair failed and we were unable to recover it. 00:25:08.690 [2024-07-15 16:41:48.015790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.690 [2024-07-15 16:41:48.015816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.690 qpair failed and we were unable to recover it. 00:25:08.690 [2024-07-15 16:41:48.015976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.690 [2024-07-15 16:41:48.016003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.690 qpair failed and we were unable to recover it. 
00:25:08.692 [2024-07-15 16:41:48.037549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.037574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.037698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.037724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.037885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.037911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.038051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.038092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.038283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.038308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 
00:25:08.692 [2024-07-15 16:41:48.038494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.038520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.038679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.038704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.038869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.038920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.039111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.039136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.039276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.039301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 
00:25:08.692 [2024-07-15 16:41:48.039462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.039487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.039671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.039697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.039874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.039909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.040113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.040141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.040301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.040326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 
00:25:08.692 [2024-07-15 16:41:48.040524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.040552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.040698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.040726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.040885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.040910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.041071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.041096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.041237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.041262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 
00:25:08.692 [2024-07-15 16:41:48.041422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.041447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.041626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.041659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.692 [2024-07-15 16:41:48.041841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.692 [2024-07-15 16:41:48.041869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.692 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.042064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.042089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.042269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.042297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.042504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.042533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.042781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.042809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.043000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.043026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.043194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.043222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.043430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.043455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.043601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.043629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.043842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.043870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.044055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.044080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.044230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.044259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.044442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.044470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.044679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.044705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.044903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.044929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.045092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.045120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.045320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.045345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.045538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.045564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.045694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.045719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.045891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.045917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.046058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.046083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.046280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.046305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.046543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.046568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.046742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.046770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.046944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.046973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.047158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.047183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.047395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.047428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.047634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.047659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.047815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.047840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.048004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.048030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.048289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.048314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.048505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.048530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.048687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.048715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.048862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.049001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.049168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.049194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.049404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.049432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.049687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.049715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.049896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.049921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.050081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.050106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.050264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.050305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.050460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.050486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.050683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.050711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.050888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.050931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 
00:25:08.693 [2024-07-15 16:41:48.051069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.051094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.051223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.051266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.051430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.051455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.051611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.693 [2024-07-15 16:41:48.051636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.693 qpair failed and we were unable to recover it. 00:25:08.693 [2024-07-15 16:41:48.051799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.051825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 
00:25:08.694 [2024-07-15 16:41:48.051972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.051998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.052182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.052208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.052390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.052417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.052620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.052648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.052828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.052853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 
00:25:08.694 [2024-07-15 16:41:48.052996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.053026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.053201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.053229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.053390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.053415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.053546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.053571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 00:25:08.694 [2024-07-15 16:41:48.053754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.694 [2024-07-15 16:41:48.053782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.694 qpair failed and we were unable to recover it. 
00:25:08.695 [... the same pair of errors — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." — repeats continuously through [2024-07-15 16:41:48.076168] ...]
00:25:08.696 [2024-07-15 16:41:48.076351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.076378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.076558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.076584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.076764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.076792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.076935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.076963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.077119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.077144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 
00:25:08.696 [2024-07-15 16:41:48.077267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.077292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.077470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.077498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.077705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.077730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.077956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.077985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.078193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.078221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 
00:25:08.696 [2024-07-15 16:41:48.078380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.078406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.078623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.078651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.078828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.078856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.079040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.079066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.079221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.079249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 
00:25:08.696 [2024-07-15 16:41:48.079422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.079450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.079631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.079657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.079837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.079864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.080025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.080054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.080298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.080323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 
00:25:08.696 [2024-07-15 16:41:48.080508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.080536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.080714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.080742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.080924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.080950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.081204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.081232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.081400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.081428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 
00:25:08.696 [2024-07-15 16:41:48.081670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.081695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.081887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.696 [2024-07-15 16:41:48.081915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.696 qpair failed and we were unable to recover it. 00:25:08.696 [2024-07-15 16:41:48.082093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.082119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.082260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.082285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.082462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.082490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.082664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.082692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.082870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.082901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.083084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.083112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.083359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.083387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.083592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.083618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.083764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.083789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.083928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.083954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.084110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.084135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.084289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.084314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.084515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.084540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.084735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.084760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.084925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.084952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.085127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.085154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.085333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.085358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.085542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.085571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.085733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.085758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.085996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.086022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.086218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.086243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.086404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.086450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.086640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.086665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.086842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.086870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.087055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.087080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.087221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.087246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.087489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.087517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.087714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.087742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.087926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.087952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.088136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.088164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.088335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.088363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.088541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.088566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.088713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.088741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.088931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.088957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.089120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.089145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.089287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.089313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.089479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.089520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.089698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.089726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.089900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.089941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.090125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.090166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.090382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.090407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.090585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.090613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 00:25:08.697 [2024-07-15 16:41:48.090787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.697 [2024-07-15 16:41:48.090815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.697 qpair failed and we were unable to recover it. 
00:25:08.697 [2024-07-15 16:41:48.090998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.091023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.091174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.091202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.091356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.091384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.091595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.091620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.091773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.091801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 
00:25:08.698 [2024-07-15 16:41:48.091968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.092001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.092189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.092214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.092391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.092419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.092627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.092652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 00:25:08.698 [2024-07-15 16:41:48.092815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.698 [2024-07-15 16:41:48.092840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.698 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.115345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.115371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.115552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.115580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.115761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.115789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.115997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.116022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.116184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.116211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.116358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.116385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.116568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.116594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.116777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.116809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.116988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.117016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.117206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.117231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.117359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.117383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.117544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.117569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.117733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.117758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.117935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.117963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.118118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.118146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.118318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.118343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.118482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.118506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.118707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.118734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.118897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.118923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.119081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.119107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.119257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.119285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.119501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.119525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.119706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.119734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.119935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.119963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.120144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.120169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.120348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.120376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.120522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.120548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.120750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.120777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.120968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.120994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.121173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.121201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.121349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.121374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.121547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.121574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.121789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.121814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.121999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.122025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.122245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.122273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 
00:25:08.700 [2024-07-15 16:41:48.122456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.122484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.122643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.122670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.700 qpair failed and we were unable to recover it. 00:25:08.700 [2024-07-15 16:41:48.122852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.700 [2024-07-15 16:41:48.122885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.123038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.123065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.123222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.123247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.123408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.123432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.123569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.123593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.123782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.123807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.123988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.124016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.124192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.124220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.124403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.124428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.124632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.124660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.124869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.124901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.125067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.125092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.125231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.125256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.125385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.125409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.125599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.125624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.125785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.125812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.125955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.125983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.126163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.126188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.126336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.126363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.126568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.126593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.126777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.126802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.126982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.127009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.127164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.127191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.127349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.127374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.127508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.127548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.127701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.127729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.127914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.127940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.128094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.128118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.128296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.128324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.128484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.128509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.128642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.128682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.128859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.128893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.129078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.129103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.129295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.129321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.129526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.129553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.129728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.129753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.129963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.129992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 00:25:08.701 [2024-07-15 16:41:48.130167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.701 [2024-07-15 16:41:48.130195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.701 qpair failed and we were unable to recover it. 
00:25:08.701 [2024-07-15 16:41:48.130377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.701 [2024-07-15 16:41:48.130405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.701 qpair failed and we were unable to recover it.
00:25:08.703 (last three messages repeated 114 more times between 16:41:48.130 and 16:41:48.154: every connect() attempt to 10.0.0.2 port 4420 failed with errno = 111 (ECONNREFUSED), and tqpair=0x1628200 was never recovered)
00:25:08.703 [2024-07-15 16:41:48.154525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.703 [2024-07-15 16:41:48.154550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.703 qpair failed and we were unable to recover it. 00:25:08.703 [2024-07-15 16:41:48.154681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.703 [2024-07-15 16:41:48.154705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.703 qpair failed and we were unable to recover it. 00:25:08.703 [2024-07-15 16:41:48.154860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.703 [2024-07-15 16:41:48.154894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.703 qpair failed and we were unable to recover it. 00:25:08.703 [2024-07-15 16:41:48.155104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.703 [2024-07-15 16:41:48.155129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.703 qpair failed and we were unable to recover it. 00:25:08.703 [2024-07-15 16:41:48.155312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.703 [2024-07-15 16:41:48.155337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.703 qpair failed and we were unable to recover it. 
00:25:08.703 [2024-07-15 16:41:48.155542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.703 [2024-07-15 16:41:48.155570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.703 qpair failed and we were unable to recover it. 00:25:08.703 [2024-07-15 16:41:48.155743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.703 [2024-07-15 16:41:48.155769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.155945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.156008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.156218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.156243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.156393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.156420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.156604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.156631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.156805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.156832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.157006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.157031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.157168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.157210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.157382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.157410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.157598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.157622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.157782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.157807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.157958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.157986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.158166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.158194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.158405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.158457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.158632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.158656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.158816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.158857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.159035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.159062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.159236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.159264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.159450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.159475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.159655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.159682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.159833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.159860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.160042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.160070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.160227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.160252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.160375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.160418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.160590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.160618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.160766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.160794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.160984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.161009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.161179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.161207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.161381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.161413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.161729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.161787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.161966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.161992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.162117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.162157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.162304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.162332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.162514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.162539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.162701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.162726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.162884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.162911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.163056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.163082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.163250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.163277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.163457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.163481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.163686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.163712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.163893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.163935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.164091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.164116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.164281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.164306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.164515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.164543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.164743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.164770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.164917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.164945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.165121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.165146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.165354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.165382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.165529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.165556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.165757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.165784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.166005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.166031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.166219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.166246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.166421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.166449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.166646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.166694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 
00:25:08.704 [2024-07-15 16:41:48.166855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.166885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.704 [2024-07-15 16:41:48.167070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.704 [2024-07-15 16:41:48.167098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.704 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.167250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.167278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.167482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.167536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.167694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.167718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 
00:25:08.705 [2024-07-15 16:41:48.167851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.167874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.168070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.168098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.168311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.168336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.168522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.168547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.168693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.168719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 
00:25:08.705 [2024-07-15 16:41:48.168871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.168906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.169058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.169085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.169267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.169293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.169470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.169497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 00:25:08.705 [2024-07-15 16:41:48.169676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.169704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 
00:25:08.705 [2024-07-15 16:41:48.169930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.705 [2024-07-15 16:41:48.169994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.705 qpair failed and we were unable to recover it. 
[Identical connect() failures (errno = 111, ECONNREFUSED) for tqpair=0x1628200 against addr=10.0.0.2, port=4420 repeat continuously from 16:41:48.169930 through 16:41:48.194089; duplicate log entries elided.]
00:25:08.707 [2024-07-15 16:41:48.194239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.194266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.194420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.194447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.194649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.194674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.194830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.194858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.195064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.195092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 
00:25:08.707 [2024-07-15 16:41:48.195339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.195385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.195598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.195623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.195788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.195820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.195999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.196027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.196234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.196288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 
00:25:08.707 [2024-07-15 16:41:48.196477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.196501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.196655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.196680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.196867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.196902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.197075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.197103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.197281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.197306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 
00:25:08.707 [2024-07-15 16:41:48.197463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.197490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.197638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.197665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.197858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.197888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.198033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.198057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.198211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.198239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 
00:25:08.707 [2024-07-15 16:41:48.198414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.198441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.198665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.198715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.198949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.198974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.199158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.199187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.199386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.199414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 
00:25:08.707 [2024-07-15 16:41:48.199601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.199650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.199812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.199837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.200006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.200031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.200193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.200221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.200430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.200458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 
00:25:08.707 [2024-07-15 16:41:48.200639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.200664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.707 [2024-07-15 16:41:48.200868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.707 [2024-07-15 16:41:48.200909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.707 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.201057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.201084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.201278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.201302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.201460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.201489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.201673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.201701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.201875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.201910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.202060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.202087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.202242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.202267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.202421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.202448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.202615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.202642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.202819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.202847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.203009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.203034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.203167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.203209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.203422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.203446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.203603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.203628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.203785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.203810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.203963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.203991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.204138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.204165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.204352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.204377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.204542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.204567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.204770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.204797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.204952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.204980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.205240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.205291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.205470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.205494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.205699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.205726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.205901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.205929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.206139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.206167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.206316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.206341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.206539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.206566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.206768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.206796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.206996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.207049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.207209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.207233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.207397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.207421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.207586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.207613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.207789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.207818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.208025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.208050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.208231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.208257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.208418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.208445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.208627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.208655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.208837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.208862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.209051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.209078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.209259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.209286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.209453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.209479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.209616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.209640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.708 [2024-07-15 16:41:48.209806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.209849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.210043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.210068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.210260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.210288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.210468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.210493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 00:25:08.708 [2024-07-15 16:41:48.210661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.708 [2024-07-15 16:41:48.210687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.708 qpair failed and we were unable to recover it. 
00:25:08.710 [2024-07-15 16:41:48.233022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.710 [2024-07-15 16:41:48.233047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.710 qpair failed and we were unable to recover it. 00:25:08.710 [2024-07-15 16:41:48.233171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.233211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.233386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.233414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.233623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.233648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.233780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.233804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.233977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.234005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.234179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.234207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.234349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.234377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.234569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.234594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.234747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.234774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.234949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.234978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.235123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.235150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.235301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.235325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.235463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.235488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.235616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.235640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.235829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.235856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.236080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.236105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.236260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.236288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.236439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.236466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.236629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.236656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.236804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.236829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.236972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.236997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.237217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.237245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.237445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.237472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.237653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.237678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.237813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.237855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.238026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.238051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.238188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.238213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.238350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.238375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.238557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.238584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.238728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.238755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.238914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.238943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.239127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.239153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.239337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.239365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.239532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.239563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.239763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.239787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.239925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.239951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.240090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.240115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.240277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.240305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.240558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.240608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.240791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.240816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.240950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.241004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.241204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.241232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.241479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.241506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.241682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.241707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 
00:25:08.711 [2024-07-15 16:41:48.241843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.241867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.242032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.711 [2024-07-15 16:41:48.242072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.711 qpair failed and we were unable to recover it. 00:25:08.711 [2024-07-15 16:41:48.242305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.242351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.242539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.242564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.242698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.242723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 
00:25:08.712 [2024-07-15 16:41:48.242916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.242944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.243139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.243166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.243315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.243341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.243548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.243576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.243751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.243779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 
00:25:08.712 [2024-07-15 16:41:48.243942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.243970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.244182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.244206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.244360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.244388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.244526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.244553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.244696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.244723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 
00:25:08.712 [2024-07-15 16:41:48.244909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.244935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.245098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.245128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.245308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.245335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.245491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.245518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.245694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.245719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 
00:25:08.712 [2024-07-15 16:41:48.245862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.245918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.246087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.246114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.246250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.246277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.246463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.246487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.246663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.246690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 
00:25:08.712 [2024-07-15 16:41:48.246853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.246891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.247063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.247090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.247277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.247302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.247436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.247478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 00:25:08.712 [2024-07-15 16:41:48.247620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.712 [2024-07-15 16:41:48.247648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.712 qpair failed and we were unable to recover it. 
00:25:08.712 [2024-07-15 16:41:48.247798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.712 [2024-07-15 16:41:48.247826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.712 qpair failed and we were unable to recover it.
00:25:08.998 [entries from 16:41:48.247985 through 16:41:48.270534 repeat the same pair of errors verbatim: connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1038:posix_sock_create, followed by the sock connection error for tqpair=0x1628200 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock; every retry ends with "qpair failed and we were unable to recover it."]
00:25:08.998 [2024-07-15 16:41:48.270711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.270738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.270887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.270915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.271073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.271097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.271279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.271306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.271564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.271592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.271765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.271793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.271957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.271982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.272117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.272141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.272301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.272328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.272515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.272543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.272755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.272779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.272968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.272995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.273178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.273202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.273340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.273382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.273535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.273560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.273701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.273725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.273857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.273890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.274053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.274081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.274230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.274255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.274455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.274482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.274623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.274650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.274796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.274823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.274991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.275016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.275140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.275181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.275357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.275384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.275550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.275596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.275746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.275770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.275910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.275936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.276099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.276141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.276318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.276346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.276535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.276559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.276741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.276768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.276949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.276978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.277120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.277148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.277329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.277354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.277486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.277511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.277647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.277672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.277855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.277891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.278055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.278079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.278220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.278244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.278404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.278429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.278605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.278652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.278810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.278835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.279000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.279029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.279178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.279205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 
00:25:08.998 [2024-07-15 16:41:48.279351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.279379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.998 qpair failed and we were unable to recover it. 00:25:08.998 [2024-07-15 16:41:48.279592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.998 [2024-07-15 16:41:48.279617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.279771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.279798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.279949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.279977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.280220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.280248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 
00:25:08.999 [2024-07-15 16:41:48.280402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.280427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.280555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.280596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.280745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.280774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.280937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.280963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.281128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.281153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 
00:25:08.999 [2024-07-15 16:41:48.281325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.281354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.281498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.281527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.281676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.281704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.281852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.281884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.282026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.282052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 
00:25:08.999 [2024-07-15 16:41:48.282227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.282253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.282379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.282407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.282561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.282586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.282714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.282754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.282940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.282966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 
00:25:08.999 [2024-07-15 16:41:48.283120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.283145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.283302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.283326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.283469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.283511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.283649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.283677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.283855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.283898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 
00:25:08.999 [2024-07-15 16:41:48.284082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.284106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.284233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.284275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.284446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.284474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.284614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.284641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 00:25:08.999 [2024-07-15 16:41:48.284827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.999 [2024-07-15 16:41:48.284851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:08.999 qpair failed and we were unable to recover it. 
00:25:08.999 [2024-07-15 16:41:48.285012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.285041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.285188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.285216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.285377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.285405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.285611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.285636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.285855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.285886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.286021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.286045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.286205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.286229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.286384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.286409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.286578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.286605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.286779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.286806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.286983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.287011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.287163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.287188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.287360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.287387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.287532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.287565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.287765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.287793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.287999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.288025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.288207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.288235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.288408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.288435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.288618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.288643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.288770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.288794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.288933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.288977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.289127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.289155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.289358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.289386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.289532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.289557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.289714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:08.999 [2024-07-15 16:41:48.289757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:08.999 qpair failed and we were unable to recover it.
00:25:08.999 [2024-07-15 16:41:48.289906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.289936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.290144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.290169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.290303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.290328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.290533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.290561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.290708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.290736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.290937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.290966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.291115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.291141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.291302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.291328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.291506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.291533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.291735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.291760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.291913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.291939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.292069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.292094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.292253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.292278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.292458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.292486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.292643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.292668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.292817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.292846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.293015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.293041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.293240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.293288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.293475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.293500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.293671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.293699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.293894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.293923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.294107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.294132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.294294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.294319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.294469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.294498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.294644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.294672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.294874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.294922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.295081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.295106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.295270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.295295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.295470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.295498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.295697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.295745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.295921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.295947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.296128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.296155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.296325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.296354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.296541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.296569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.296755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.296780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.296945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.296971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.297104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.297129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.297274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.297302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.297484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.297509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.297720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.297747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.297947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.297976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.298156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.298184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.298338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.298363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.298499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.298524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.298664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.298690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.298891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.298920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.000 [2024-07-15 16:41:48.299101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.000 [2024-07-15 16:41:48.299126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.000 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.299306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.299334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.299501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.299529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.299721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.299746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.299910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.299936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.300091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.300120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.300318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.300346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.300489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.300516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.300707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.300732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.300889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.300914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.301069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.301097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.301274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.301301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.301493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.301518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.301662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.301688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.301833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.301860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.302029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.302055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.302186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.302211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.302416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.302443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.302598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.302626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.302788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.302815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.302962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.302988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.303196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.303224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.303374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.303400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.303579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.303606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.303854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.303888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.304086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.304114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.304263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.304290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.304464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.304491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.304678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.304702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.304841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.304867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.305065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.305093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.305307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.305335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.305506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.305530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.305661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.305702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.305858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.305906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.306087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.306115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.306290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.306315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.306488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.306520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.306671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.306699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.306909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.306935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.307111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.307135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.307291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.307318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.307456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.307483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.307658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.001 [2024-07-15 16:41:48.307685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.001 qpair failed and we were unable to recover it.
00:25:09.001 [2024-07-15 16:41:48.307870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.307901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.308063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.308090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.308248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.308275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.308416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.308444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.308597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.308622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 
00:25:09.001 [2024-07-15 16:41:48.308802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.308829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.308976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.309005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.309154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.309182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.309372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.309398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.309587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.309615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 
00:25:09.001 [2024-07-15 16:41:48.309788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.001 [2024-07-15 16:41:48.309816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.001 qpair failed and we were unable to recover it. 00:25:09.001 [2024-07-15 16:41:48.309991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.310019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.310202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.310227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.310397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.310424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.310578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.310606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.310785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.310813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.310971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.310997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.311154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.311179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.311315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.311339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.311525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.311553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.311740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.311769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.311901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.311926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.312080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.312108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.312259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.312287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.312497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.312522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.312672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.312698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.312868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.312902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.313060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.313084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.313220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.313245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.313423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.313449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.313637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.313662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.313798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.313823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.313991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.314017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.314200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.314226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.314369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.314396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.314560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.314587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.314733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.314757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.314894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.314936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.315112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.315139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.315311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.315338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.315486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.315510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.315664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.315707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.315855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.315887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.316063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.316089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.316294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.316318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.316497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.316523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.316688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.316715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.316864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.316898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.317065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.317090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.317269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.317296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.317496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.317522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.317697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.317724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.317893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.317920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.318110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.318137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.318343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.318368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.318516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.318543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.318700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.318725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.318932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.318960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.319105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.319132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.319330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.319354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.319490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.319515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.319673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.319700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.319905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.319930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.320060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.320085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.320244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.320270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 
00:25:09.002 [2024-07-15 16:41:48.320430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.320455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.002 qpair failed and we were unable to recover it. 00:25:09.002 [2024-07-15 16:41:48.320595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.002 [2024-07-15 16:41:48.320620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 00:25:09.003 [2024-07-15 16:41:48.320790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.320816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 00:25:09.003 [2024-07-15 16:41:48.320965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.320992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 00:25:09.003 [2024-07-15 16:41:48.321132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.321157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 
00:25:09.003 [2024-07-15 16:41:48.321318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.321343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 00:25:09.003 [2024-07-15 16:41:48.321499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.321525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 00:25:09.003 [2024-07-15 16:41:48.321691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.321715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 00:25:09.003 [2024-07-15 16:41:48.321930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.321956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 00:25:09.003 [2024-07-15 16:41:48.322123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.322149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 
00:25:09.003 [2024-07-15 16:41:48.322288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.003 [2024-07-15 16:41:48.322315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.003 qpair failed and we were unable to recover it. 
[... the identical error pair — posix.c:1038:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeats verbatim throughout this interval (timestamps 16:41:48.322288 through 16:41:48.342623) ...]
00:25:09.005 [2024-07-15 16:41:48.342784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.342809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.342943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.342969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.343132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.343156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.343312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.343336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.343488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.343513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 
00:25:09.005 [2024-07-15 16:41:48.343646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.343670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.343821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.343845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.343993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.344019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.344186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.344210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.344375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.344399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 
00:25:09.005 [2024-07-15 16:41:48.344533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.344558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.344713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.344738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.344873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.344913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.345046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.345071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.345206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.345230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 
00:25:09.005 [2024-07-15 16:41:48.345373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.345397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.345520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.345545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.345715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.345739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.345871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.345902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.346036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.346061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 
00:25:09.005 [2024-07-15 16:41:48.346221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.346249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.346381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.346406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.346593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.346618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.346773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.346798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.346962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.346986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 
00:25:09.005 [2024-07-15 16:41:48.347120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.347145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.347347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.347372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.347556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.347582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.347710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.347734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.347900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.347926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 
00:25:09.005 [2024-07-15 16:41:48.348085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.005 [2024-07-15 16:41:48.348110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.005 qpair failed and we were unable to recover it. 00:25:09.005 [2024-07-15 16:41:48.348269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.348293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.348478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.348503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.348645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.348670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.348812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.348837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.348988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.349014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.349176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.349201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.349365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.349389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.349525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.349550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.349712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.349737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.349923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.349948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.350105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.350130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.350312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.350337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.350498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.350523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.350661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.350685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.350861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.350926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.351081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.351107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.351285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.351318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.351472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.351496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.351629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.351655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.351904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.351930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.352106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.352134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.352280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.352308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.352460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.352487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.352665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.352690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.352830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.352855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.353024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.353049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.353207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.353235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.353382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.353407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.353572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.353615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.353811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.353836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.353975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.354001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.354144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.354169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.354349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.354377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.354576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.354604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.354749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.354776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.354955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.354981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.355119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.355144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.355297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.355338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.355537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.355565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.355719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.355743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.355884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.355910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.356103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.356131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.356305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.356333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.356497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.356522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.356734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.356762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.356933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.356962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.357116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.357144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.357325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.357350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.357502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.357529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.357709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.357737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.357960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.357985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.358154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.358179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.358344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.358370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.358520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.358547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 
00:25:09.006 [2024-07-15 16:41:48.358731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.006 [2024-07-15 16:41:48.358756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.006 qpair failed and we were unable to recover it. 00:25:09.006 [2024-07-15 16:41:48.358919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.358945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.359106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.359132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.359313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.359342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.359547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.359597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.359777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.359802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.359988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.360016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.360200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.360227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.360441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.360490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.360671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.360696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.360848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.360883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.361039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.361066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.361246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.361273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.361425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.361451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.361610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.361652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.361849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.361883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.362034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.362062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.362254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.362279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.362437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.362461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.362597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.362622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.362835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.362862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.363082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.363107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.363290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.363317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.363470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.363498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.363637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.363665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.363860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.363892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.364036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.364060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.364209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.364234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.364460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.364484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.364643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.364667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.364845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.364883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.365040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.365067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.365241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.365269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.365451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.365475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.365614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.365639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.365781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.365806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.365997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.366023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.366165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.366189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.366344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.366386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.366541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.366568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.366706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.366732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.366891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.366917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.367051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.367076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.367264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.367292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.367502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.367531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 
00:25:09.007 [2024-07-15 16:41:48.367681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.367710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.367884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.367912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.368059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.368083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.007 [2024-07-15 16:41:48.368305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.007 [2024-07-15 16:41:48.368355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.007 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.368577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.368601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.368727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.368752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.368889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.368916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.369057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.369081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.369219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.369244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.369380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.369423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.369627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.369654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.369805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.369832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.370044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.370074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.370254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.370278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.370426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.370451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.370645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.370688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.370920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.370945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.371104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.371128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.371318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.371342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.371539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.371563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.371688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.371713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.371896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.371937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.372129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.372169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.372373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.372422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.372630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.372658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.372836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.372864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.373082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.373107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.373292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.373317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.373497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.373522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.373698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.373725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.373940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.373965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.374129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.374154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.374285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.374309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.374512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.374539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.374690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.374718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.374939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.374965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.375103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.375128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.375291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.375333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.375502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.375529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.375706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.375738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.008 [2024-07-15 16:41:48.375914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.375940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.376073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.376098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.376261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.376285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.376472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.376521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 00:25:09.008 [2024-07-15 16:41:48.376732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.008 [2024-07-15 16:41:48.376760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.008 qpair failed and we were unable to recover it. 
00:25:09.010 [2024-07-15 16:41:48.399469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.010 [2024-07-15 16:41:48.399494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.010 qpair failed and we were unable to recover it. 00:25:09.010 [2024-07-15 16:41:48.399651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.010 [2024-07-15 16:41:48.399676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.010 qpair failed and we were unable to recover it. 00:25:09.010 [2024-07-15 16:41:48.399805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.010 [2024-07-15 16:41:48.399829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.010 qpair failed and we were unable to recover it. 00:25:09.010 [2024-07-15 16:41:48.399999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.010 [2024-07-15 16:41:48.400024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.010 qpair failed and we were unable to recover it. 00:25:09.010 [2024-07-15 16:41:48.400191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.010 [2024-07-15 16:41:48.400216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.010 qpair failed and we were unable to recover it. 
00:25:09.010 [2024-07-15 16:41:48.400374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.010 [2024-07-15 16:41:48.400399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.400583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.400613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.400769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.400796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.400945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.400971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.401109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.401133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.401311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.401338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.401490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.401517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.401670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.401698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.401903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.401934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.402115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.402142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.402318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.402345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.402516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.402579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.402789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.402815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.402953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.402979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.403138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.403162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.403343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.403371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.403531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.403557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.403763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.403791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.403969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.403994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.404179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.404204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.404342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.404367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.404523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.404548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.404725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.404753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.404933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.404961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.405166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.405191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.405373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.405401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.405573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.405601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.405771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.405798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.405980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.406005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.406169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.406197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.406374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.406402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.406569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.406621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.406796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.406822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.407013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.407042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.407247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.407274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.407554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.407605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.407787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.407812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.407958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.407986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.408161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.408190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.408389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.408440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.408628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.408653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.408858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.408893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.409068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.409100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.409279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.409306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.409492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.409516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.409690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.409718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.409894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.409926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.410098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.410125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.410333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.410357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.410532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.410559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 
00:25:09.011 [2024-07-15 16:41:48.410731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.410759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.410912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.410940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.411118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.411142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.411319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.411346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.011 qpair failed and we were unable to recover it. 00:25:09.011 [2024-07-15 16:41:48.411495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.011 [2024-07-15 16:41:48.411523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 
00:25:09.012 [2024-07-15 16:41:48.411724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.411752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.411932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.411958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.412121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.412146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.412326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.412353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.412595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.412641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 
00:25:09.012 [2024-07-15 16:41:48.412821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.412846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.413010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.413039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.413222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.413247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.413461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.413488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.413658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.413682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 
00:25:09.012 [2024-07-15 16:41:48.413882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.413911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.414047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.414075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.414285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.414344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.414525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.414550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 00:25:09.012 [2024-07-15 16:41:48.414729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.012 [2024-07-15 16:41:48.414761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.012 qpair failed and we were unable to recover it. 
00:25:09.012 [2024-07-15 16:41:48.414949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.414977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.415178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.415231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.415414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.415439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.415619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.415647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.415830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.415858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.416036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.416065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.416228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.416252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.416463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.416492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.416664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.416692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.416870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.416907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.417088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.417113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.417273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.417301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.417469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.417496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.417719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.417771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.417980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.418005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.418161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.418187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.418372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.418399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.418600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.418627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.418812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.418837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.419035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.419061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.419245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.419271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.419410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.419452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.419660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.419685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.419869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.419904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.420052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.420080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.420264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.420291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.420474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.420503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.420711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.420739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.420914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.420951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.421124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.421151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.421311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.421337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.421478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.421503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.421659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.421684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.421874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.421906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.422044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.422069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.422251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.422279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.422429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.422457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.422658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.422686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.012 qpair failed and we were unable to recover it.
00:25:09.012 [2024-07-15 16:41:48.422831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.012 [2024-07-15 16:41:48.422856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.423037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.423065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.423269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.423297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.423488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.423538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.423724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.423749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.423924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.423953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.424101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.424129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.424299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.424327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.424536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.424561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.424712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.424740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.424942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.424970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.425144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.425172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.425346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.425371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.425576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.425604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.425782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.425810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.426014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.426042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.426261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.426286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.426438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.426465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.426667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.426692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.426821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.426848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.427017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.427043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.427219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.427247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.427452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.427480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.427682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.427709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.427886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.427911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.428090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.428119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.428294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.428321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.428485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.428511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.428681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.428706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.428874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.428907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.429038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.429081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.429216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.429243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.429436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.429461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.429625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.429651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.429829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.429857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.430030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.430055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.430215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.430240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.430445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.430472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.430661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.430686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.430924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.430967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.431149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.431174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.431355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.431384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.431551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.431579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.431756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.431785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.431964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.431990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.432168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.432195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.432444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.432472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.432699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.432724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.013 [2024-07-15 16:41:48.432921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.013 [2024-07-15 16:41:48.432947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.013 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.433101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.433131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.433312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.433340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.433638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.433697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.433897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.433923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.434108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.434136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.434337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.434366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.434604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.434654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.434861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.434900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.435057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.435082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.435214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.435239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.435373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.435398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.435561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.435586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.435766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.435794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.436005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.436031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.436173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.436215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.436393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.436419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.436582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.436607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.436788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.436816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.436992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.437020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.437184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.437208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.437372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.437397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.437599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.437626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.437787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.437812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.437973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.437998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.438144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.438191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.438361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.438389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.438569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.438594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.438729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.014 [2024-07-15 16:41:48.438753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.014 qpair failed and we were unable to recover it.
00:25:09.014 [2024-07-15 16:41:48.438933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.438959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.439096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.439138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.439310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.439338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.439550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.439574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.439729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.439757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 
00:25:09.014 [2024-07-15 16:41:48.439946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.439975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.440155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.440186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.440336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.440360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.440490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.440530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.440682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.440710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 
00:25:09.014 [2024-07-15 16:41:48.440888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.440916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.441101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.441126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.441324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.441351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.441529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.441556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.441738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.441767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 
00:25:09.014 [2024-07-15 16:41:48.441946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.441972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.442128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.442155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.442301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.442328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.442514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.442539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.442733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.442758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 
00:25:09.014 [2024-07-15 16:41:48.442939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.442968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.443122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.443149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.443358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.443382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.443542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.014 [2024-07-15 16:41:48.443568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.014 qpair failed and we were unable to recover it. 00:25:09.014 [2024-07-15 16:41:48.443727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.443752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.443919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.443948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.444127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.444154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.444365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.444390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.444574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.444601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.444807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.444837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.444994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.445020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.445185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.445209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.445391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.445418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.445598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.445625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.445802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.445830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.446003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.446028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.446206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.446233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.446425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.446450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.446637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.446663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.446798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.446839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.447022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.447047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.447198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.447227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.447540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.447604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.447809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.447836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.447999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.448025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.448226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.448253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.448553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.448610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.448792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.448820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.448991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.449017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.449196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.449231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.449519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.449569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.449775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.449803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.450017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.450043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.450197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.450225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.450461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.450509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.450719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.450746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.450934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.450959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.451097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.451123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.451308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.451333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.451581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.451631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.451832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.451860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.452029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.452055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.452261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.452317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.452467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.452492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.452669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.452696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.452899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.452940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.453077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.453103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.453245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.453270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.453477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.453504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.453651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.453678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.453833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.453860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.454025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.454050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.454229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.454263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.454442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.454470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.015 [2024-07-15 16:41:48.454624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.454657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 
00:25:09.015 [2024-07-15 16:41:48.454867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.015 [2024-07-15 16:41:48.454898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.015 qpair failed and we were unable to recover it. 00:25:09.016 [2024-07-15 16:41:48.455084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.016 [2024-07-15 16:41:48.455109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.016 qpair failed and we were unable to recover it. 00:25:09.016 [2024-07-15 16:41:48.455284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.016 [2024-07-15 16:41:48.455311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.016 qpair failed and we were unable to recover it. 00:25:09.016 [2024-07-15 16:41:48.455654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.016 [2024-07-15 16:41:48.455708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.016 qpair failed and we were unable to recover it. 00:25:09.016 [2024-07-15 16:41:48.455897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.016 [2024-07-15 16:41:48.455922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.016 qpair failed and we were unable to recover it. 
00:25:09.017 [2024-07-15 16:41:48.478799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.478826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.478983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.479008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.479193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.479219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.479387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.479415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.479590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.479617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 
00:25:09.017 [2024-07-15 16:41:48.479791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.479819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.479983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.480012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.480146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.480191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.480340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.480367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.480577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.480602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 
00:25:09.017 [2024-07-15 16:41:48.480749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.480778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.480940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.480969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.481150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.481176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.481337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.481361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.481494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.481536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 
00:25:09.017 [2024-07-15 16:41:48.481690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.481718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.481869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.481917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.482066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.482090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.482227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.482252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.482386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.482410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 
00:25:09.017 [2024-07-15 16:41:48.482653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.482704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.482857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.482888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.483027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.483052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.483191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.483217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.483404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.483431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 
00:25:09.017 [2024-07-15 16:41:48.483638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.483663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.483813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.483841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.484007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.484033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.484164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.484205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 00:25:09.017 [2024-07-15 16:41:48.484388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.017 [2024-07-15 16:41:48.484413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.017 qpair failed and we were unable to recover it. 
00:25:09.017 [2024-07-15 16:41:48.484563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.484590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.484737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.484766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.484946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.484975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.485178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.485207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.485388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.485416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.485587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.485615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.485756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.485784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.485987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.486013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.486171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.486199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.486354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.486382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.486554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.486582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.486758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.486783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.486960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.486988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.487158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.487186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.487334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.487361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.487545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.487570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.487729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.487756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.487906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.487950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.488085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.488110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.488280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.488305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.488450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.488478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.488651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.488679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.488851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.488892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.489047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.489072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.489199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.489241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.489393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.489423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.489620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.489645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.489827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.489855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.490023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.490049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.490223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.490252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.490402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.490429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.490589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.490614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.490805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.490833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.491000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.491025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.491186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.491229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.491413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.491438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.491568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.491593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.491774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.491802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.491994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.492020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.492283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.492338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.492517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.492559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.492692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.492717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.492905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.492934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.493096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.493121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 00:25:09.018 [2024-07-15 16:41:48.493331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.018 [2024-07-15 16:41:48.493359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.018 qpair failed and we were unable to recover it. 
00:25:09.018 [2024-07-15 16:41:48.493539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.018 [2024-07-15 16:41:48.493567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.018 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated for each retry through 16:41:48.516315, timestamps varying only ...]
00:25:09.020 [2024-07-15 16:41:48.516461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.516489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.516629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.516660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.516835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.516863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.517054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.517079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.517234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.517262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.517428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.517456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.517634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.517667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.517812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.517837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.518015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.518043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.518213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.518241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.518440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.518465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.518627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.518653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.518803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.518831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.520012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.520046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.520272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.520321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.520530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.520556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.520717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.520745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.520959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.520985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.521177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.521203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.521365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.521390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.521575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.521603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.521760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.521788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.521992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.522021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.522175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.522201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.522337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.522378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.522554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.522582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.522731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.522759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.522923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.522949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.523082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.523125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.523310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.523338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.523524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.523549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.523744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.523769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.523936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.523965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.524140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.524172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.524427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.524455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.524614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.524641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.524793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.524836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.525034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.525060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.525264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.525320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.525469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.525494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 
00:25:09.020 [2024-07-15 16:41:48.525656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.525688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.525922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.525953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.526143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.020 [2024-07-15 16:41:48.526172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.020 qpair failed and we were unable to recover it. 00:25:09.020 [2024-07-15 16:41:48.526344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.526370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.526518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.526543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.021 [2024-07-15 16:41:48.526694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.526723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.526868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.526917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.527104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.527130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.527335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.527364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.527513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.527541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.021 [2024-07-15 16:41:48.527714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.527741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.527935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.527961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.528131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.528173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.528379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.528404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.528587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.528616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.021 [2024-07-15 16:41:48.528768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.528795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.528939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.528982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.529164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.529193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.529347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.529375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.529556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.529581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.021 [2024-07-15 16:41:48.529750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.529783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.529939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.529968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.530113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.530140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.530296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.530320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.530494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.530522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.021 [2024-07-15 16:41:48.530674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.530703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.530850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.530887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.531040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.531067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.531205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.531230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.531395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.531422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.021 [2024-07-15 16:41:48.531596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.531623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.531777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.531802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.531935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.531961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.532121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.532148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.532330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.532358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.021 [2024-07-15 16:41:48.532548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.532580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.532731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.532759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.532941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.532969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.533137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.533164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 00:25:09.021 [2024-07-15 16:41:48.533349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.021 [2024-07-15 16:41:48.533374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.021 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.555797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.555824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.556026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.556052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.556210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.556235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.556418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.556446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.556669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.556721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.556909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.556945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.557120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.557144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.557306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.557333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.557512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.557540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.557704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.557729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.557914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.557943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.558115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.558142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.558369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.558415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.558579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.558604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.558767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.558792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.558975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.559000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.559191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.559246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.559426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.559451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.559585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.559610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.559809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.559836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.560075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.560101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.560285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.560310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.560455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.560482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.560686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.560714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.560916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.560948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.561136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.561161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.561311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.561339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.561477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.561504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.561698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.561731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.561957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.561983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.562134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.562162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.562301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.562329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.562528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.562556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.562742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.562770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.562934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.562962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 
00:25:09.024 [2024-07-15 16:41:48.563138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.563166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.563389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.563444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.563661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.024 [2024-07-15 16:41:48.563686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.024 qpair failed and we were unable to recover it. 00:25:09.024 [2024-07-15 16:41:48.563827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.563851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.564033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.564059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.564212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.564241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.564395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.564420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.564623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.564651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.564800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.564828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.565000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.565027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.565180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.565206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.565389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.565417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.565622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.565650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.565794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.565823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.565999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.566024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.566231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.566259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.566429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.566458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.566662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.566711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.566928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.566954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.567114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.567142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.567292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.567320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.567521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.567550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.567723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.567748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.567887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.567931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.568112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.568140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.568311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.568343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.568497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.568522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.568684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.568710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.568888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.568918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.569426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.569456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.569672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.569698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.569902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.569938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.570131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.570159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.570340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.570367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.570541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.570566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.570746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.570775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.570952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.570981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.571162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.571191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.571399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.571424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 00:25:09.025 [2024-07-15 16:41:48.571593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.025 [2024-07-15 16:41:48.571621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.025 qpair failed and we were unable to recover it. 
00:25:09.025 [2024-07-15 16:41:48.571796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.025 [2024-07-15 16:41:48.571824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.025 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create connect() failed with errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 16:41:48.571796 through 16:41:48.594539, with elapsed-time prefixes 00:25:09.025 through 00:25:09.309 ...]
00:25:09.309 [2024-07-15 16:41:48.594671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.594713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.594858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.594893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.595051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.595079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.595232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.595257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.595414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.595439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.595570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.595594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.595782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.595807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.595975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.596002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.596185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.596211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.596389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.596416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.596635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.596660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.596788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.596814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.596993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.597020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.597173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.597201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.597373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.597418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.597602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.597626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.597761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.597785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.597937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.597965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.598112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.598140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.598319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.598344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.598489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.598516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.598691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.598718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.598959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.598985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.599143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.599168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.599372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.599400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.599574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.599602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.599753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.599780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.599959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.599984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.600196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.600224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.600371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.600399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.600617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.600650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.600847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.600872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.601076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.601103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.601289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.601317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.601555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.601606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.601760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.601785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.601952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.601977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.602145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.602172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.602371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.602400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.602582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.602607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.602756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.602784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.602954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.602982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.603161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.603185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.603347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.603372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.603557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.603585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 
00:25:09.309 [2024-07-15 16:41:48.603731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.309 [2024-07-15 16:41:48.603758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.309 qpair failed and we were unable to recover it. 00:25:09.309 [2024-07-15 16:41:48.603901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.603935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.604124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.604150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.604360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.604388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.604558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.604586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 
00:25:09.310 [2024-07-15 16:41:48.604765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.604792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.604999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.605025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.605176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.605204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.605405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.605433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.605652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.605699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 
00:25:09.310 [2024-07-15 16:41:48.605882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.605908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.606111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.606139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.606313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.606341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.606513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.606541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.606745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.606771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 
00:25:09.310 [2024-07-15 16:41:48.606951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.606983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.607131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.607159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.607305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.607332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.607542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.607566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.607700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.607725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 
00:25:09.310 [2024-07-15 16:41:48.607859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.607891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.608084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.608112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.608289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.608315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.608442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.608483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.608634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.608662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 
00:25:09.310 [2024-07-15 16:41:48.608832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.608860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.609035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.609060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.609185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.609210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.609403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.609431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 00:25:09.310 [2024-07-15 16:41:48.609639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.609695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it. 
00:25:09.310 [2024-07-15 16:41:48.609901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.310 [2024-07-15 16:41:48.609943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.310 qpair failed and we were unable to recover it.
[log condensed: the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 2024-07-15 16:41:48.609901 through 16:41:48.633203; only the timestamps differ between repetitions]
00:25:09.312 [2024-07-15 16:41:48.633330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.633355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.633578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.633603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.633782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.633810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.634011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.634040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.634191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.634218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 
00:25:09.312 [2024-07-15 16:41:48.634422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.634446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.634605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.634633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.634806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.634833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.635020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.635050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.635207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.635232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 
00:25:09.312 [2024-07-15 16:41:48.635437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.635466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.635652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.635680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.312 [2024-07-15 16:41:48.635856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.312 [2024-07-15 16:41:48.635892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.312 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.636054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.636080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.636208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.636249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.636393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.636421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.636564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.636593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.636744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.636768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.636932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.636957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.637127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.637158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.637371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.637395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.637577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.637601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.637777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.637803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.637967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.637994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.638168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.638195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.638351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.638375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.638535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.638576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.638755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.638782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.638961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.638988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.639135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.639159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.639334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.639361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.639510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.639537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.639688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.639716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.639882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.639907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.640071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.640095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.640298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.640326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.640527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.640571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.640728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.640753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.640888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.640913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.641113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.641140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.641314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.641342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.641512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.641537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.641671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.641713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.641906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.641933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.642080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.642108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.642275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.642300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.642437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.642461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.642644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.642671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.642840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.642868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.643054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.643078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.643225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.643252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.643428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.643455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.643645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.643670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.643805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.643830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.643983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.644011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.644164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.644191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.644337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.644364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.644551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.644575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.644716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.644743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.644892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.644921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 
00:25:09.313 [2024-07-15 16:41:48.645107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.313 [2024-07-15 16:41:48.645132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.313 qpair failed and we were unable to recover it. 00:25:09.313 [2024-07-15 16:41:48.645271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.645295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.645469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.645496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.645674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.645702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.645853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.645889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 
00:25:09.314 [2024-07-15 16:41:48.646074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.646098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.646246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.646273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.646430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.646458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.646626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.646654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.646797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.646822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 
00:25:09.314 [2024-07-15 16:41:48.646958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.647000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.647178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.647206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.647355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.647382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.647541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.647565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 00:25:09.314 [2024-07-15 16:41:48.647750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.314 [2024-07-15 16:41:48.647778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.314 qpair failed and we were unable to recover it. 
00:25:09.314 [2024-07-15 16:41:48.647960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.647988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.648132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.648159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.648318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.648342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.648502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.648527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.648719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.648747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.648897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.648926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.649131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.649155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.649312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.649340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.649523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.649551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.649723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.649751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.649902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.649928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.650102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.650129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.650307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.650338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.650513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.650541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.650719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.650744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.650920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.650949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.651093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.651121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.651309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.651360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.651513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.651538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.651672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.651697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.651832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.651856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.652013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.652041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.652189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.652214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.652386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.652414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.652600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.652628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.652801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.652829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.653001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.653026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.653159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.653184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.653342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.653371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.653526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.653554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.653706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.653731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.653869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.653902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.654109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.654137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.654282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.654310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.654465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.654489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.654669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.654696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.654869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.654906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.655051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.655079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.655221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.655246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.655400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.655447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.655626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.655654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.314 [2024-07-15 16:41:48.655798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.314 [2024-07-15 16:41:48.655825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.314 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.655981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.656006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.656161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.656189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.656363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.656391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.656557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.656584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.656726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.656751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.656885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.656919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.657056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.657081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.657214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.657239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.657370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.657395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.657570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.657598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.657773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.657801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.657976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.658005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.658183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.658208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.658339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.658383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.658585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.658613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.658755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.658784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.658959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.658985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.659163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.659191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.659392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.659420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.659607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.659633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.659777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.659802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.659956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.659984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.660195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.660220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.660375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.660400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.660534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.660563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.660701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.660743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.660942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.660970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.661109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.661136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.661291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.661317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.661448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.661490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.661641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.661668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.661817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.661845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.662041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.662067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.662205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.662229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.662361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.662386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.662581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.662612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.662785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.662810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.662969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.662994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.663156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.663197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.663393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.663438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.663619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.663644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.663825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.663853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.664015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.664039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.664211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.664242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.664440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.664465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.664593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.664636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.664789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.664818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.664970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.664998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.665176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.665202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.665359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.665386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.665558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.665586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.665786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.665813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.665977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.666003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.666213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.666241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.666384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.666412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.666585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.315 [2024-07-15 16:41:48.666612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.315 qpair failed and we were unable to recover it.
00:25:09.315 [2024-07-15 16:41:48.666765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.666790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.666985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.667014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.667187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.667215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.667420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.667464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.667665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.667690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.667843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.667870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.668058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.668085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.668253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.668281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.668462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.668488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.668628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.668671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.668819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.668846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.669000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.669029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.669184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.669208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.669337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.669377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.669567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.669595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.669747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.669776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.669958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.316 [2024-07-15 16:41:48.669984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.316 qpair failed and we were unable to recover it.
00:25:09.316 [2024-07-15 16:41:48.670168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.670195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.670340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.670367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.670517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.670544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.670730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.670755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.670916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.670941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 
00:25:09.316 [2024-07-15 16:41:48.671089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.671117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.671320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.671365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.671535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.671560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.671714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.671741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.671907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.671935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 
00:25:09.316 [2024-07-15 16:41:48.672131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.672159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.672317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.672342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.672496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.672537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.672721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.672745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.672889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.672931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 
00:25:09.316 [2024-07-15 16:41:48.673144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.673170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.673305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.673329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.673467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.673491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.673661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.673689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.673847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.673883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 
00:25:09.316 [2024-07-15 16:41:48.674016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.674040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.674203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.674243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.674418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.674446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.674628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.674653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.674798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.674822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 
00:25:09.316 [2024-07-15 16:41:48.674984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.675013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.675173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.675213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.675405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.675454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.675605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.675632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.675807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.675834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 
00:25:09.316 [2024-07-15 16:41:48.675993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.676018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.676201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.676227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.316 [2024-07-15 16:41:48.676403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.316 [2024-07-15 16:41:48.676430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.316 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.676584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.676612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.676792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.676819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.676978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.677004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.677155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.677182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.677354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.677381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.677552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.677603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.677761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.677786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.677957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.677986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.678162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.678189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.678356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.678383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.678560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.678585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.678717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.678741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.678874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.678924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.679104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.679136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.679321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.679346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.679496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.679525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.679672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.679700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.679885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.679914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.680096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.680121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.680254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.680296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.680485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.680510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.680669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.680693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.680854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.680885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.681016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.681042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.681193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.681219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.681398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.681425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.681578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.681603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.681750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.681775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.681907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.681932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.682083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.682111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.682263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.682288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.682453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.682477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.682633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.682660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.682827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.682855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.683019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.683044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.683223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.683251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.683398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.683425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.683598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.683626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.683798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.683824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.684034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.684063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.684210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.684236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.684385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.684412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.684564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.684589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.684723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.684765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.684917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.684944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.685088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.685116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.685289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.685314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.685489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.685516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.685694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.685722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.685864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.685908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.686084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.686110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.686291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.686319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 
00:25:09.317 [2024-07-15 16:41:48.686466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.686493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.686717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.686741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.686890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.686916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.317 [2024-07-15 16:41:48.687051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.317 [2024-07-15 16:41:48.687094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.317 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.687234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.687261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.687436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.687466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.687623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.687648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.687811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.687854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.688037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.688062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.688223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.688256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.688461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.688486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.688644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.688672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.688814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.688841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.689002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.689030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.689181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.689206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.689339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.689363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.689529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.689556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.689730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.689758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.689927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.689953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.690125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.690152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.690304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.690333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.690475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.690503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.690657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.690681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.690808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.690833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.690975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.691000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.691152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.691179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.691338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.691362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.691555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.691580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.691735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.691762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.691945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.691982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.692151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.692175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.692307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.692332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.692520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.692548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.692724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.692751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.692932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.692957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.693136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.693164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.693316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.693344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.693544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.693572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.693774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.693799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.693986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.694011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.694194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.694218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.694407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.694458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.694640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.694665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.694822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.694850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.695024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.695049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.695186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.695211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.695371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.695395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.695576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.695605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.695750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.695778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.695974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.696002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.696184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.696208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.696360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.696388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.696565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.696594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.696737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.696764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.696944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.696968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.697123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.697151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.697329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.697361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.697541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.697568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.697747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.697771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 00:25:09.318 [2024-07-15 16:41:48.697986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.318 [2024-07-15 16:41:48.698014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.318 qpair failed and we were unable to recover it. 
00:25:09.318 [2024-07-15 16:41:48.698235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.698262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.698449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.698477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.698688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.698713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.698866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.698900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.699072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.699099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 
00:25:09.319 [2024-07-15 16:41:48.699244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.699272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.699430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.699455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.699595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.699619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.699758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.699783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.699983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.700036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 
00:25:09.319 [2024-07-15 16:41:48.700220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.700245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.700452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.700480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.700678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.700706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.700887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.700914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.701101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.701126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 
00:25:09.319 [2024-07-15 16:41:48.701257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.701282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.701441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.701465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.701628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.701657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.701862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.701893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.702058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.702085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 
00:25:09.319 [2024-07-15 16:41:48.702294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.702322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.702477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.702506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.702661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.702685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.702821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.702850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.703058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.703084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 
00:25:09.319 [2024-07-15 16:41:48.703300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.703352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.703563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.703588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.703748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.703776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.703961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.703986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.704162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.704190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 
00:25:09.319 [2024-07-15 16:41:48.704338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.704364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.704543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.704570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.704724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.704752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.704986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.705035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 00:25:09.319 [2024-07-15 16:41:48.705209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.319 [2024-07-15 16:41:48.705234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.319 qpair failed and we were unable to recover it. 
00:25:09.319 [2024-07-15 16:41:48.705397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.705441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.705611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.705639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.705816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.705843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.706030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.706055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.706189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.706214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.706373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.706400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.706576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.706603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.706788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.706813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.706985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.707013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.707150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.707177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.707355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.707382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.707560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.707584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.707757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.707785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.707936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.707964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.708177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.708201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.708391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.319 [2024-07-15 16:41:48.708416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.319 qpair failed and we were unable to recover it.
00:25:09.319 [2024-07-15 16:41:48.708605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.708633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.708780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.708808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.708967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.708995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.709198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.709223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.709371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.709399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.709551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.709579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.709758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.709785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.709963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.709988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.710160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.710188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.710367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.710394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.710638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.710682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.710890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.710915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.711103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.711131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.711343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.711371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.711599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.711649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.711819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.711844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.712035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.712064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.712231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.712259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.712445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.712472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.712671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.712696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.712874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.712910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.713083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.713111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.713331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.713379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.713584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.713609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.713738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.713778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.713937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.713966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.714146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.714172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.714337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.714362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.714535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.714563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.714738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.714765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.714947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.714972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.715129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.715154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.715326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.715354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.715504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.715532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.715708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.715736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.715888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.715913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.716046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.716087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.716241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.716269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.716465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.716492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.716701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.716726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.716882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.716915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.717095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.717123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.717300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.717325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.717487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.717512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.717664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.717689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.717818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.717859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.718061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.718087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.718284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.718309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.718501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.718529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.718702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.718730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.718984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.719035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.719246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.719271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.719469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.719494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.719634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.320 [2024-07-15 16:41:48.719675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.320 qpair failed and we were unable to recover it.
00:25:09.320 [2024-07-15 16:41:48.719832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.719860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.720086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.720111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.720266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.720294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.720469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.720496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.720651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.720679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.720889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.720915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.721098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.721126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.721273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.721301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.721450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.721478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.721683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.721708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.721859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.721894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.722041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.722068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.722249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.722277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.722466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.722496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.722638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.722681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.722825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.722852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.723060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16360e0 is same with the state(5) to be set
00:25:09.321 [2024-07-15 16:41:48.723297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.723336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.723531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.723561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.723730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.723774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.723903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.723931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.724077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.724103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.724260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.724305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.724584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.724635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.724800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.724827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.724972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.321 [2024-07-15 16:41:48.724998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:09.321 qpair failed and we were unable to recover it.
00:25:09.321 [2024-07-15 16:41:48.725164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.725193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.725388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.725437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.725616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.725659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.725800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.725826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.725962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.725988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 
00:25:09.321 [2024-07-15 16:41:48.726165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.726194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.726391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.726434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.726630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.726673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.726838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.726863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.727035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.727080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 
00:25:09.321 [2024-07-15 16:41:48.727252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.727278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.727484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.727528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.727693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.727723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.727897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.727940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.728101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.728126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 
00:25:09.321 [2024-07-15 16:41:48.728346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.728374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.728548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.728575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.728748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.728775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.728921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.728946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.729119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.729144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 
00:25:09.321 [2024-07-15 16:41:48.729306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.729334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.729575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.729603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.729790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.729831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.730002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.730028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.730187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.730211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 
00:25:09.321 [2024-07-15 16:41:48.730358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.730383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.730562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.730590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.730830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.730858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.731026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.731055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 00:25:09.321 [2024-07-15 16:41:48.731191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.321 [2024-07-15 16:41:48.731215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.321 qpair failed and we were unable to recover it. 
00:25:09.321 [2024-07-15 16:41:48.731399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.731427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.731564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.731591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.731732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.731759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.731954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.731977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.732176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.732203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.732378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.732405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.732585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.732612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.732789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.732817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.733003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.733028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.733172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.733196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.733406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.733435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.733586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.733614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.733787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.733815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.733994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.734020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.734182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.734224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.734406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.734433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.734618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.734645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.734794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.734822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.734995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.735021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.735151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.735175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.735372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.735398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.735536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.735561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.735696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.735738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.735915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.735942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.736132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.736157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.736289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.736318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.736474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.736498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.736661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.736685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.736864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.736897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.737053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.737081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.737262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.737286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.737431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.737455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.737642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.737667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.737888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.737914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.738095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.738123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.738269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.738297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.738505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.738529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.738717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.738745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.738922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.738950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.739142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.739167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.739328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.739352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.739540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.739564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.739730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.739755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.739933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.739961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.740138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.740166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.740346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.740370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.740549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.740576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.740753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.740780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.740962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.740988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.741166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.741194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.741363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.741389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.741557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.741582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.741737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.741766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.741924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.741952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.322 [2024-07-15 16:41:48.742157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.742181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 
00:25:09.322 [2024-07-15 16:41:48.742322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.322 [2024-07-15 16:41:48.742346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.322 qpair failed and we were unable to recover it. 00:25:09.323 [2024-07-15 16:41:48.742531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.323 [2024-07-15 16:41:48.742556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.323 qpair failed and we were unable to recover it. 00:25:09.323 [2024-07-15 16:41:48.742752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.323 [2024-07-15 16:41:48.742776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.323 qpair failed and we were unable to recover it. 00:25:09.323 [2024-07-15 16:41:48.742957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.323 [2024-07-15 16:41:48.742985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.323 qpair failed and we were unable to recover it. 00:25:09.323 [2024-07-15 16:41:48.743157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.323 [2024-07-15 16:41:48.743184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.323 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.765320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.765345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.765506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.765531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.765718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.765745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.765955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.765984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.766161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.766186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.766391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.766419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.766573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.766602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.766791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.766816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.767001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.767026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.767214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.767242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.767400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.767426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.767610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.767638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.767841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.767869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.768046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.768071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.768202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.768227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.768383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.768407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.768579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.768604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.768737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.768762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.768895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.768931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.769131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.769156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.769334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.769363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.769527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.769554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.769715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.769739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.769902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.769928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.770091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.770116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.770265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.770289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.770443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.770472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.770616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.770644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.770856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.770887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.771045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.771073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.771214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.771242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.771434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.771459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.771633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.771660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.771882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.771908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.772043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.772068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.772224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.772249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.772404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.772429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.772581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.772605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.772751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.772778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.772948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.772976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.773128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.773153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.773327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.773355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.773501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.773529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.773738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.773762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 00:25:09.325 [2024-07-15 16:41:48.773951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.773980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.325 qpair failed and we were unable to recover it. 
00:25:09.325 [2024-07-15 16:41:48.774158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.325 [2024-07-15 16:41:48.774190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.774364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.774389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.774562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.774590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.774722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.774749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.774896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.774922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.775126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.775153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.775321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.775349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.775503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.775528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.775696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.775738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.775919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.775947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.776161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.776187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.776361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.776388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.776565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.776593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.776776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.776801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.777009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.777037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.777181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.777209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.777361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.777385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.777516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.777556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.777758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.777785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.777954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.777980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.778111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.778135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.778323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.778350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.778506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.778531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.778725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.778749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.778932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.778957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.779079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.779104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.779279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.779309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.779510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.779542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.779724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.779749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.779950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.779979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.780152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.780180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.780335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.780360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.780532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.780559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.780738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.780766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.780946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.780972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.781175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.781202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.781351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.781379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.781554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.781579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.781769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.781796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.781964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.781993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.782159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.782184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.782317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.782343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.782493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.782518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.782679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.782703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.782863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.782895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.326 [2024-07-15 16:41:48.783054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.783082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.783272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.783296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.783451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.783479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.783627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.783655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 00:25:09.326 [2024-07-15 16:41:48.783805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.326 [2024-07-15 16:41:48.783830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.326 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.783983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.784028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.784195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.784223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.784373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.784397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.784530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.784569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.784744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.784776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.784958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.784984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.785167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.785194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.785370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.785395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.785525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.785550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.785755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.785782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.785930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.785958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.786165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.786190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.786355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.786380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.786516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.786541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.786726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.786751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.786929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.786957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.787135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.787163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.787325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.787350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.787508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.787532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.787690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.787718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.787894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.787920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.788052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.788076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.788235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.788275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.788454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.788479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.788657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.788685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.788866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.788900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.789082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.789107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.789256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.789283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.789451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.789478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.789656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.789681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.789904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.789933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.790107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.790135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.790344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.790368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.790526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.790554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.790709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.790736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.790886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.790911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.791085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.791113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.791296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.791321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.791483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.791508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.791667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.791694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.791865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.791904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.792091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.792117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.792260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.792287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.792462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.792489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.792665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.792690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.792901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.792929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.793098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.793125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.793309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.793334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.793514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.793541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.793743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.793771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.793976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.794001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.794184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.794212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.794388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.794416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.794597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.794622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.327 [2024-07-15 16:41:48.794753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.794779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 
00:25:09.327 [2024-07-15 16:41:48.794994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.327 [2024-07-15 16:41:48.795022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.327 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.795209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.795234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.795408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.795436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.795580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.795607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.795765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.795790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 
00:25:09.328 [2024-07-15 16:41:48.795991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.796019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.796196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.796224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.796410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.796435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.796610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.796638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.796814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.796841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 
00:25:09.328 [2024-07-15 16:41:48.797027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.797053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.797197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.797224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.797429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.797456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.797653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.797678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.797855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.797902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 
00:25:09.328 [2024-07-15 16:41:48.798071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.798099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.798269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.798294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.798422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.798468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.798668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.798695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.798886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.798912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 
00:25:09.328 [2024-07-15 16:41:48.799082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.799110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.799286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.799313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.799498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.799523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.799705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.799732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.799870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.799906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 
00:25:09.328 [2024-07-15 16:41:48.800091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.800116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.800263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.800291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.800459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.800487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.800692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.800716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 00:25:09.328 [2024-07-15 16:41:48.800935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.328 [2024-07-15 16:41:48.800961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.328 qpair failed and we were unable to recover it. 
00:25:09.328 [2024-07-15 16:41:48.801119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.801144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.801306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.801331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.801455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.801480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.801642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.801667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.801789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.801814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.801986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.802014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.802217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.802245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.802390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.802415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.802591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.802619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.802792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.802820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.803023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.803048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.803233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.803261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.803405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.803432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.803601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.803626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.803793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.803826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.804008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.804033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.804168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.804193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.804323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.804364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.804549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.804574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.804770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.804795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.804979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.805007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.805161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.805188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.805362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.805387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.805565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.805592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.805778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.805803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.805967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.805993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.328 [2024-07-15 16:41:48.806204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.328 [2024-07-15 16:41:48.806232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.328 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.806380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.806408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.806560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.806585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.806743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.806768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.806987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.807015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.807197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.807222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.807401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.807428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.807579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.807607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.807757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.807782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.807985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.808013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.808221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.808249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.808399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.808424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.808558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.808583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.808745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.808772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.808932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.808957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.809088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.809113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.809247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.809272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.809400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.809425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.809560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.809584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.809715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.809739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.809922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.809948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.810109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.810136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.810310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.810338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.810520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.810545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.810749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.810777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.810942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.810970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.811178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.811202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.811385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.811413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.811560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.811587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.811737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.811762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.811939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.811967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.812140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.812167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.812371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.812396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.812603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.812631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.812838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.812866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.813048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.813074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.813252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.813279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.813440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.813467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.813645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.813669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.813847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.813874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.814064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.814091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.814269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.814294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.814439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.814467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.814613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.814641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.814815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.814856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.815018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.815043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.815234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.815262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.815413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.815439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.815641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.815668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.815847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.329 [2024-07-15 16:41:48.815874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.329 qpair failed and we were unable to recover it.
00:25:09.329 [2024-07-15 16:41:48.816075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.816100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.816274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.816301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.816475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.816502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.816659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.816684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.816812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.816836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.817059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.817084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.817248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.817277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.817462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.817487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.817669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.817697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.817886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.817911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.818090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.818117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.818256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.818284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.818468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.818493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.818648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.818676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.818849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.818884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.819071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.819096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.819305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.819333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.819511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.819538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.819718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.330 [2024-07-15 16:41:48.819743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.330 qpair failed and we were unable to recover it.
00:25:09.330 [2024-07-15 16:41:48.819913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.819942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.820146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.820174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.820333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.820358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.820539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.820566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.820719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.820747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 
00:25:09.330 [2024-07-15 16:41:48.820905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.820930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.821056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.821097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.821282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.821307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.821457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.821482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.821639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.821664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 
00:25:09.330 [2024-07-15 16:41:48.821825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.821854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.822057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.822083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.822222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.822247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.822454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.822482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.822639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.822668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 
00:25:09.330 [2024-07-15 16:41:48.822884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.822913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.823118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.823145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.823324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.823348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.823518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.823546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.823697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.823724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 
00:25:09.330 [2024-07-15 16:41:48.823904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.823930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.824136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.824163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.824342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.824370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.824539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.824563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.824769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.824797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 
00:25:09.330 [2024-07-15 16:41:48.824974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.825002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.825174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.825199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.825371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.825398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.825572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.825599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.825778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.825802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 
00:25:09.330 [2024-07-15 16:41:48.825926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.825968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.826137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.826165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.826347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.826372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.826550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.826578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.826753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.826780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 
00:25:09.330 [2024-07-15 16:41:48.826958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.826983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.330 qpair failed and we were unable to recover it. 00:25:09.330 [2024-07-15 16:41:48.827138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.330 [2024-07-15 16:41:48.827179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.827383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.827411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.827565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.827590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.827761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.827788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.827946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.827971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.828130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.828160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.828310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.828338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.828484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.828512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.828700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.828725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.828928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.828956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.829125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.829153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.829325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.829350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.829555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.829582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.829749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.829777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.829937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.829962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.830144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.830172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.830323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.830350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.830522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.830547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.830725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.830753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.830938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.830966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.831118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.831143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.831280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.831305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.831491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.831516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.831712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.831737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.831946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.831974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.832176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.832204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.832388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.832413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.832563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.832590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.832762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.832790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.832949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.832975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.833102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.833147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.833297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.833325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.833531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.833556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.833759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.833787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.833937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.833966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.834141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.834165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.834375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.834403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.834601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.834629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.834776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.834801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.834963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.834989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.835148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.835174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.835328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.835352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.835513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.835537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.835693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.835736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.835917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.835942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.836119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.836147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.836323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.836357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.836532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.836557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.836735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.836763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.331 [2024-07-15 16:41:48.836946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.836971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.837105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.837132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.837272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.837297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.837458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.837483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 00:25:09.331 [2024-07-15 16:41:48.837643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.331 [2024-07-15 16:41:48.837668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:09.331 qpair failed and we were unable to recover it. 
00:25:09.333 [2024-07-15 16:41:48.850408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.333 [2024-07-15 16:41:48.850433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.333 qpair failed and we were unable to recover it.
00:25:09.333 [2024-07-15 16:41:48.850613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.333 [2024-07-15 16:41:48.850641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:09.333 qpair failed and we were unable to recover it.
00:25:09.333 [2024-07-15 16:41:48.850801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.333 [2024-07-15 16:41:48.850846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.333 qpair failed and we were unable to recover it.
00:25:09.333 [2024-07-15 16:41:48.851044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.333 [2024-07-15 16:41:48.851072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.333 qpair failed and we were unable to recover it.
00:25:09.333 [2024-07-15 16:41:48.851251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.333 [2024-07-15 16:41:48.851279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.333 qpair failed and we were unable to recover it.
00:25:09.334 [2024-07-15 16:41:48.859617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.859643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.859851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.859885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.860078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.860103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.860254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.860283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.860461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.860489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.860710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.860765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.860968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.860993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.861124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.861149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.861304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.861332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.861537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.861562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.861720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.861748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.861908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.861937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.862153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.862179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.862359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.862388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.862562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.862592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.862768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.862794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.862976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.863005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.863163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.863191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.863353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.863378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.863539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.863564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.863725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.863753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.863928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.863954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.864137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.864164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.864336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.864364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.864540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.864566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.864772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.864801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.865024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.865049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.865177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.865202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.865378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.865406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.865681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.865730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.865945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.865971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.866164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.866192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.866369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.866397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.866551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.866576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.866756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.866785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.866937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.866966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.867144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.867169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.867301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.867342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.867594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.867645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.867824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.867849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.867995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.868021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.868198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.868226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.868400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.868425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.868612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.868640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.868827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.868852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.868987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.869013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.869190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.869218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.869362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.869390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.869602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.869627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.869801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.869829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 
00:25:09.334 [2024-07-15 16:41:48.869994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.870021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.870211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.334 [2024-07-15 16:41:48.870237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.334 qpair failed and we were unable to recover it. 00:25:09.334 [2024-07-15 16:41:48.870392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.870421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.870594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.870622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.870793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.870819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 
00:25:09.335 [2024-07-15 16:41:48.870949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.870993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.871179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.871204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.871399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.871425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.871557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.871583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.871746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.871789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 
00:25:09.335 [2024-07-15 16:41:48.871946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.871971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.872113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.872138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.872328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.872354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.872578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.872603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.872788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.872816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 
00:25:09.335 [2024-07-15 16:41:48.872994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.873023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.873229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.873254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.873412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.873440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.873586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.873614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.873764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.873789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 
00:25:09.335 [2024-07-15 16:41:48.873943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.873991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.874175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.874203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.874348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.874374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.874579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.874607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.874750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.874778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 
00:25:09.335 [2024-07-15 16:41:48.874937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.874963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.875145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.875173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.875384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.875412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.875615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.875641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 00:25:09.335 [2024-07-15 16:41:48.875850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.335 [2024-07-15 16:41:48.875884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.335 qpair failed and we were unable to recover it. 
00:25:09.618 [2024-07-15 16:41:48.898122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.618 [2024-07-15 16:41:48.898154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.618 qpair failed and we were unable to recover it. 00:25:09.618 [2024-07-15 16:41:48.898305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.618 [2024-07-15 16:41:48.898333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.618 qpair failed and we were unable to recover it. 00:25:09.618 [2024-07-15 16:41:48.898485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.618 [2024-07-15 16:41:48.898510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.618 qpair failed and we were unable to recover it. 00:25:09.618 [2024-07-15 16:41:48.898673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.618 [2024-07-15 16:41:48.898715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.618 qpair failed and we were unable to recover it. 00:25:09.618 [2024-07-15 16:41:48.898905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.898932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.899097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.899122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.899334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.899363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.899537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.899566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.899742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.899767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.899951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.899980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.900176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.900205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.900421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.900446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.900591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.900619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.900772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.900801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.900973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.900999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.901129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.901154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.901285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.901311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.901511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.901536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.901708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.901736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.901939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.901967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.902125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.902151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.902322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.902350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.902489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.902517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.902664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.902690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.902865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.902903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.903070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.903098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.903250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.903275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.903464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.903492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.903693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.903721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.903886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.903912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.904047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.904090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.904261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.904289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.904443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.904468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.904606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.904631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.904768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.904793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.904998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.905024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.905234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.905262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.905427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.905455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.905639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.905664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.905841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.905868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.906049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.906081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.906255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.906280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.906429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.906459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.906670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.906699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.906911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.906937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.907075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.907100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.907258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.907284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.907444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.907469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.907648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.907676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.907846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.907874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.908054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.908079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.908238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.908263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.908472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.908500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.908704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.908729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.908896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.908921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.909126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.909154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.909360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.909386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.909597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.909625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.909802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.909830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.910019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.910044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.910253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.910281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.910487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.910516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.910728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.910754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.910945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.910973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.911151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.911179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 
00:25:09.619 [2024-07-15 16:41:48.911357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.911382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.911540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.911565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.911751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.619 [2024-07-15 16:41:48.911779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.619 qpair failed and we were unable to recover it. 00:25:09.619 [2024-07-15 16:41:48.911961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.911987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.912139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.912169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 
00:25:09.620 [2024-07-15 16:41:48.912323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.912353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.912565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.912590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.912741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.912769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.912951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.912978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.913140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.913166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 
00:25:09.620 [2024-07-15 16:41:48.913327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.913355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.913531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.913559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.913710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.913735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.913874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.913905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 00:25:09.620 [2024-07-15 16:41:48.914124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.620 [2024-07-15 16:41:48.914152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.620 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.936618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.936645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.936820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.936845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.937063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.937092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.937293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.937321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.937479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.937505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.937693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.937718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.937915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.937942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.938129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.938155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.938356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.938384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.938556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.938584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.938738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.938768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.938957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.938986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.939163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.939191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.939370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.939396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.939527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.939552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.939715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.939758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.939944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.939970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.940129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.940154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.940314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.940340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.940498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.940523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.940709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.940737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.940945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.940971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.941100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.941126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.941306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.941335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.941478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.941507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.941690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.941716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.941888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.941914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.942094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.942123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.942276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.942301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.942466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.942491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.942612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.942637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.942827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.942852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.943023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.943052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.943230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.943255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.943444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.943469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.943650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.943678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.943829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.943857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.944079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.944105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.944258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.944286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.944462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.944491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.944673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.944698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.944863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.944905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.945039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.945065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.945248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.945273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.945424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.945451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.945632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.945660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.945843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.945869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.946061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.946089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.946235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.946265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.946443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.946469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.946609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.946639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.946835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.946860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.947061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.947087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.947265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.947295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.947447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.947476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.947655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.947680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.947855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.947891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.948071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.948099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.948252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.948278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.948408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.948449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 
00:25:09.622 [2024-07-15 16:41:48.948622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.948651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.948852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.948884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.949099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.949127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.622 [2024-07-15 16:41:48.949268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.622 [2024-07-15 16:41:48.949296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.622 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.949481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.949506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 
00:25:09.623 [2024-07-15 16:41:48.949637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.949679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.949858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.949894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.950080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.950105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.950287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.950315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.950488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.950516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 
00:25:09.623 [2024-07-15 16:41:48.950695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.950720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.950896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.950925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.951077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.951105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.951252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.951278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 00:25:09.623 [2024-07-15 16:41:48.951447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.623 [2024-07-15 16:41:48.951472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.623 qpair failed and we were unable to recover it. 
00:25:09.623 [2024-07-15 16:41:48.951633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.951658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.951807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.951832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.952022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.952051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.952194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.952223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.952432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.952457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.952638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.952666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.952851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.952897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.953046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.953071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.953202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.953242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.953442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.953470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.953668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.953693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.953886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.953914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.954070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.954099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.954254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.954279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.954412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.954454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.954610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.954642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.954824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.954850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.955022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.955051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.955236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.955271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.955475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.955501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.955681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.955709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.955888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.955917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.956101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.956127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.956319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.956347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.956501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.956531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.956733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.956763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.956948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.956974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.957110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.957135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.957341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.957366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.957549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.957577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.957748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.957777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.957958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.957985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.958161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.958189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.958364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.958393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.958574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.958600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.958779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.958807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.958983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.959013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.959160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.959185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.959345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.959389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.959566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.959594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.959749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.959774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.959902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.959928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.960096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.960121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.960277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.960302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.960458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.960484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.960689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.960717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.960879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.623 [2024-07-15 16:41:48.960906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.623 qpair failed and we were unable to recover it.
00:25:09.623 [2024-07-15 16:41:48.961063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.961105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.961261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.961289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.961472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.961498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.961647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.961676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.961853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.961900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.962111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.962136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.962343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.962371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.962554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.962582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.962758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.962787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.962929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.962955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.963089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.963114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.963274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.963299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.963475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.963504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.963651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.963679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.963859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.963891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.964100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.964128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.964278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.964306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.964493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.964518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.964700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.964728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.964885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.964914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.965081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.965106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.965287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.965313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.965495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.965524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.965706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.965731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.965849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.965900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.966072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.966101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.966249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.966275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.966455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.966484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.966663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.966691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.966869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.966903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.967076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.967105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.967283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.967311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.967492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.967519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.967703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.967731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.967883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.967912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.968099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.968126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.968272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.968301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.968495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.968522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.968679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.968705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.968847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.968883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.969070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.969097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.969270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.969294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.969476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.969504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.969655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.969684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.969875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.969908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.970065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.970090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.970256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.970286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.970448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.970474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.970652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.970681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.970860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.970910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.971086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.971114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.971312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.971338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.971488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.971528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.971700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.971727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.971923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.971954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.972122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.972150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.972344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.972381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.972550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.972576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.972783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.972812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.972998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.973026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.973211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.973240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.973428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.973453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.973620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.973647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.973805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.973839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.974010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.974037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.974206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.624 [2024-07-15 16:41:48.974232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.624 qpair failed and we were unable to recover it.
00:25:09.624 [2024-07-15 16:41:48.974383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.625 [2024-07-15 16:41:48.974412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.625 qpair failed and we were unable to recover it.
00:25:09.625 [2024-07-15 16:41:48.974578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.625 [2024-07-15 16:41:48.974614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.625 qpair failed and we were unable to recover it.
00:25:09.625 [2024-07-15 16:41:48.974808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.974834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.975010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.975037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.975198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.975224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.975354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.975380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.975512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.975538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.975743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.975769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.975954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.975981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.976172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.976207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.976445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.976474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.976663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.976690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.976872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.976910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.977086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.977115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.977304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.977330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.977520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.977550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.977765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.977792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.977931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.977958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.978115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.978152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.978338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.978371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.978536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.978562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.978719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.978744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.978912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.978942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.979123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.979149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.979335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.979364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.979514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.979554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.979744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.979770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.979959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.979989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.980169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.980201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.980397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.980424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.980567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.980592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.980784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.980811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.981009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.981036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.981182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.981210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.981351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.981379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.981594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.981628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.981815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.981840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.982015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.982041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.982184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.982210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.982431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.982460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.982640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.982669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.982868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.982907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.983107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.983135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.983314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.983353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.983542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.983568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.983729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.983758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.983911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.983940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.984147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.984174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.984335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.984365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.984545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.984577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.984770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.984797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.984957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.984988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.985166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.985194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.985361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.985397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.985572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.985607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.985745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.985788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.985978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.986005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 
00:25:09.625 [2024-07-15 16:41:48.986132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.986165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.625 [2024-07-15 16:41:48.986386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.625 [2024-07-15 16:41:48.986414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.625 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.986599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.986624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.986783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.986809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.987004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.987033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 
00:25:09.626 [2024-07-15 16:41:48.987213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.987239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.987384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.987410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.987574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.987618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.987808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.987834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.988049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.988076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 
00:25:09.626 [2024-07-15 16:41:48.988240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.988286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.988481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.988507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.988727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.988757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.988948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.988975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.989118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.989144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 
00:25:09.626 [2024-07-15 16:41:48.989295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.989329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.989521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.989558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.989718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.989743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.989913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.989939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.990079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.990106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 
00:25:09.626 [2024-07-15 16:41:48.990239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.990264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.990485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.990514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.990669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.990708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.990883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.990909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 00:25:09.626 [2024-07-15 16:41:48.991120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.626 [2024-07-15 16:41:48.991150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.626 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.013978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.014007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.014227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.014256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.014412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.014437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.014624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.014654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.014838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.014869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.015059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.015085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.015230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.015256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.015391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.015420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.015620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.015646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.015822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.015850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.016034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.016063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.016235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.016262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.016401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.016426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.016613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.016643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.016863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.016896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.017061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.017091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.017248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.017278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.017499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.017532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.017710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.017739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.017894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.017924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.018073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.018100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.018310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.018340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.018490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.018518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.018678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.018712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.018909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.018940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.019090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.019118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.019295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.019320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.019532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.019562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.019776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.019802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.019946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.019983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.020135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.020163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.020339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.020368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.020550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.020575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.020738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.020768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.020945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.020981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.021123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.021147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.021308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.021337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.021490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.021519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.021696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.021723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.021883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.021913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.022061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.022092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.022308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.022341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.022530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.022558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.022746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.022775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.022949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.022975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.023128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.023157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.023302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.023330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.023540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.023566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.023723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.023751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.023898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.023928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.024093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.024120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.024258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.024311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.024473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.024502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.024663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.024689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.024821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.024865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 
00:25:09.628 [2024-07-15 16:41:49.025055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.025084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.025267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.628 [2024-07-15 16:41:49.025293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.628 qpair failed and we were unable to recover it. 00:25:09.628 [2024-07-15 16:41:49.025478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.025511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.025685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.025714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.025937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.025964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.026104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.026130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.026328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.026358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.026564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.026589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.026745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.026771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.026958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.026988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.027175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.027201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.027340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.027368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.027550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.027577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.027754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.027781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.027934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.027962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.028173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.028201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.028391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.028417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.028586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.028616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.028791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.028819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.029014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.029041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.029206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.029234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.029441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.029479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.029666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.029696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.029863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.029917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.030104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.030133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.030291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.030316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.030495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.030524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.030717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.030743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.030888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.030915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.031068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.031108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.031336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.031365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.031528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.031563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.031698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.031724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.031891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.031935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.032088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.032119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.032323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.032352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.032531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.032562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.032764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.032790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.032985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.033013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.033187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.033215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.033376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.033402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.033531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.033574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.033751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.033785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.033957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.033991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.034147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.034175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.034327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.034359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.034571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.034598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.034793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.034823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.034998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.035023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.035186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.035222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.035375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.035400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.035568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.035623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.035815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.035843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.036042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.036070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.036251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.036279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.036449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.036485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.036682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.036718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.036897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.036927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.037103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.037128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.037285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.037310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.037445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.037471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.037602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.037632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.037860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.037896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.038046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.038075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.038236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.038261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.629 [2024-07-15 16:41:49.038415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.038465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.038663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.038692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.038870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.038903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.039052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.039080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 00:25:09.629 [2024-07-15 16:41:49.039236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.629 [2024-07-15 16:41:49.039264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.629 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.039454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.039480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.039660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.039689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.039837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.039866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.040039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.040064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.040229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.040254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.040419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.040448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.040607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.040632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.040774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.040835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.041029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.041056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.041219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.041245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.041405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.041446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.041588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.041630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.041845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.041892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.042067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.042096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.042271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.042301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.042468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.042493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.042673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.042704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.042892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.042922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.043126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.043161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.043327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.043354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.043501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.043529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.043711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.043737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.043888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.043918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.044107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.044136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.044290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.044316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.044499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.044529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.044694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.044723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.044905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.044932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.045107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.045135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.045299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.045329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.045512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.045537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.045743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.045770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.045922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.045961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.046130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.046157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.046287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.046329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.046549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.046584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.046724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.046759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.046972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.047001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.047183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.047212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.047418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.047445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.047640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.047668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.047824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.047863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.048031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.048060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.048193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.048235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.048404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.048432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.048607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.048634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.048799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.048828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.048996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.049022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.049191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.049218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.049423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.049456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.049607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.049635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.049809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.049835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.050034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.050067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.050273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.050301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 00:25:09.630 [2024-07-15 16:41:49.050480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.630 [2024-07-15 16:41:49.050504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.630 qpair failed and we were unable to recover it. 
00:25:09.630 [2024-07-15 16:41:49.050685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.050712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.050893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.050922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.051130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.051155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.051350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.051378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.051580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.051608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.051784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.051808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.051994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.052022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.052201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.052229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.052436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.052461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.630 qpair failed and we were unable to recover it.
00:25:09.630 [2024-07-15 16:41:49.052653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.630 [2024-07-15 16:41:49.052682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.052857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.052895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.053148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.053173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.053365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.053391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.053590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.053618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.053805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.053831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.054003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.054029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.054160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.054185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.054346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.054372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.054552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.054580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.054760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.054788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.055033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.055059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.055212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.055241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.055424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.055449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.055609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.055634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.055793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.055821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.056007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.056033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.056186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.056211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.056392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.056420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.056592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.056620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.056799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.056824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.056980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.057010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.057184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.057213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.057370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.057395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.057578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.057606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.057743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.057771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.057978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.058004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.058186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.058215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.058385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.058420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.058604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.058629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.058839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.058867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.059036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.059064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.059246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.059271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.059414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.059442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.059660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.059688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.059867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.059901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.060051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.060078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.060229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.060257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.060413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.060438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.060573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.060614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.060822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.060850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.061029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.061056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.061207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.061236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.061374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.061402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.061580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.061606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.061759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.061788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.061973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.061999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.062158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.062184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.062358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.062386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.062561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.062590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.062805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.062831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.062999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.063028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.063185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.063213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.063422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.063447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.063626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.063654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.063795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.063823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.063990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.064017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.064153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.064196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.631 [2024-07-15 16:41:49.064369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.631 [2024-07-15 16:41:49.064398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.631 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.064574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.064599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.064803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.064831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.065003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.065029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.065194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.065220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.065354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.065379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.065543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.065584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.065756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.065784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.065963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.065989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.066123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.066149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.066311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.066340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.066494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.066521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.066695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.066723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.066911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.066937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.067062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.067087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.067272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.067300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.067448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.067474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.067653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.067681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.067818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.067845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.068034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.068059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.068272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.068301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.068486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.068512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.068675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.068701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.068855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.068906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.069092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.069120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.069326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.069350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.069504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.069529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.069715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.069743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.069905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.069931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.070117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.070142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.070300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.070341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.070524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.070550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.070707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.070731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.070857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.070913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.071099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.071125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.071319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.071347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.071520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.071548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.071731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.071757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.071937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.071966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.072113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.072141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.072321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.072346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.072486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.072511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.072642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.072669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.072828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.072853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.073084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.073114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.073299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.073327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.073514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.073538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.073701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.073727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.073887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.073916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.074103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.074129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.074338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.074370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.074548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.074575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.074756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.074780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.074959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.074988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.075189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.075214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.075344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.075369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.075531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.075574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.075776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.075804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.075973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.075998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.076135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.076162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.076340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.076378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.076561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.076587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.076783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.076811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.076997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.077025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.077178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.077203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.632 [2024-07-15 16:41:49.077335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.632 [2024-07-15 16:41:49.077380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.632 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.077528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.077555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.077735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.077760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.077973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.078002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.078182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.078212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.078387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.078413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.078561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.078587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.078774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.078799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.078962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.078988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.079192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.079220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.079362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.079390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.079598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.079623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.079801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.079832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.080042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.080071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.080258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.080283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.080459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.080487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.080656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.080684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.080893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.080918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.081081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.081108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.081290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.081318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.081496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.081522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.081693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.081721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.081900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.081928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.082090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.082115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.082283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.082309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.082481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.082510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.082719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.082744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.082931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.082959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.083131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.083159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.083336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.083362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.083545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.083573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.083757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.083785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.083964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.083989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.084153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.084203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.084357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.084385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.084529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.084554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.084727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.084754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.084901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.084945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.085106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.085131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.085321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.085349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.085521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.085549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.085751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.085775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.085961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.085991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.086166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.086195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.086405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.086430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.086585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.086613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.086782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.086809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.086982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.087008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.087178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.087206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.087383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.087408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.087593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.087618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.087757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.087782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.087941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.087970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.088132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.088157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.088339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.088368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.088546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.088573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.088735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.088760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.633 [2024-07-15 16:41:49.088946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.633 [2024-07-15 16:41:49.088974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.633 qpair failed and we were unable to recover it.
00:25:09.634 [2024-07-15 16:41:49.089160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.089185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.089314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.089338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.089519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.089546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.089693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.089723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.089909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.089934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.090112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.090142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.090298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.090325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.090473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.090498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.090639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.090665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.090819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.090846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.091040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.091065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.091246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.091273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.091434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.091463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.091648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.091673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.091890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.091919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.092055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.092083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.092233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.092258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.092416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.092442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.092574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.092599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.092786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.092811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.093001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.093030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.093207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.093235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.093443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.093468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.093643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.093670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.093821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.093848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.094016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.094043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.094172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.094217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.094371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.094398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.094576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.094600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.094780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.094809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.094993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.095023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.095203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.095227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.095408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.095436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.095612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.095640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.095821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.095850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.095988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.096014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.096200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.096225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.096412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.096437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.096589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.096617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.096789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.096817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.097006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.097033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.097193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.097221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.097401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.097430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.097635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.097660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.097843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.097871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.098052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.098077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.098258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.098284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.098457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.098485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.098631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.098659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.098864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.098896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.099071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.099099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.099278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.099306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.099489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.099515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.099651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.099676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.099846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.099871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.100025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.100052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.100233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.100262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.100453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.100479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.100665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.100691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.100896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.100925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.101098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.101127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.101299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.101325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.101508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.101536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.101728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.101753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 00:25:09.634 [2024-07-15 16:41:49.101889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.101915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.634 qpair failed and we were unable to recover it. 
00:25:09.634 [2024-07-15 16:41:49.102098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.634 [2024-07-15 16:41:49.102126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.102317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.102345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.102527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.102552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.102755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.102783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.103004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.103030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 
00:25:09.635 [2024-07-15 16:41:49.103181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.103206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.103383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.103411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.103609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.103637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.103844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.103869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.104071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.104104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 
00:25:09.635 [2024-07-15 16:41:49.104289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.104317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.104471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.104497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.104626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.104668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.104838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.104865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.105076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.105101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 
00:25:09.635 [2024-07-15 16:41:49.105315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.105342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.105515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.105543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.105700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.105726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.105902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.105946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.106126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.106154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 
00:25:09.635 [2024-07-15 16:41:49.106314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.106340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.106507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.106531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.106663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.106690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.106859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.106892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 00:25:09.635 [2024-07-15 16:41:49.107029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.635 [2024-07-15 16:41:49.107054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.635 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.129318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.129346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.129497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.129526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.129688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.129712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.129897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.129923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.130140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.130167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.130365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.130391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.130546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.130575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.130746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.130774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.130972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.130998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.131158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.131183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.131362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.131389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.131566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.131591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.131757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.131782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.131939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.131965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.132128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.132153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.132336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.132364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.132539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.132567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.132771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.132795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.133005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.133032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.133187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.133215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.133417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.133442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.133631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.133659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.133835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.133863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.134019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.134044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.134183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.134209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.134346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.134372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.134504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.134529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.134657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.134700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.134857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.134893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.135082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.135106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.135259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.135286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.135492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.135520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.135696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.135720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.135903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.135931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.136116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.136148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.136302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.136327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.136504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.136532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.136679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.136706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.136918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.136943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.137099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.137127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.137329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.137356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.137562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.137587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.137763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.137790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.137950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.137978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.138139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.138164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.138306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.138332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.138568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.138596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.138753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.138778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.138970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.138998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.139144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.139172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.139351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.139376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.139528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.139555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.139726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.139753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.139974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.140000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 00:25:09.637 [2024-07-15 16:41:49.140138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.637 [2024-07-15 16:41:49.140163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.637 qpair failed and we were unable to recover it. 
00:25:09.637 [2024-07-15 16:41:49.140344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.140372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.140529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.140554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.140718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.140761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.140958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.140984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.141143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.141168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 
00:25:09.638 [2024-07-15 16:41:49.141353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.141380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.141557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.141585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.141788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.141814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.142000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.142028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.142201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.142230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 
00:25:09.638 [2024-07-15 16:41:49.142386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.142411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.142615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.142643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.142821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.142849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.143041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.143066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.143222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.143250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 
00:25:09.638 [2024-07-15 16:41:49.143430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.143458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.143637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.143662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.143823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.143847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.143993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.144018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 00:25:09.638 [2024-07-15 16:41:49.144180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.638 [2024-07-15 16:41:49.144209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.638 qpair failed and we were unable to recover it. 
00:25:09.638 [2024-07-15 16:41:49.144395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.638 [2024-07-15 16:41:49.144422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.638 qpair failed and we were unable to recover it.
00:25:09.640 [2024-07-15 16:41:49.167600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.167628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.167828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.167853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.168011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.168040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.168176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.168203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.168364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.168394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.168522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.168564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.168747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.168775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.168956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.168981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.169141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.169165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.169349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.169377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.169579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.169604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.169756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.169785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.169990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.170018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.170203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.170228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.170408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.170436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.170636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.170665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.170821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.170846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.171014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.171039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.171197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.171226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.171434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.171459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.171633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.171660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.171810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.171837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.172024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.172049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.172203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.172231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.172410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.172438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.172646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.172671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.172827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.172855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.173009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.173038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.173255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.173281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.173445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.173469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.173658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.173683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.173854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.173885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.174063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.174091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.174256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.174283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.174494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.174519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.174700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.174729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.174898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.174927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.175134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.175159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.175339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.175367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.175539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.175567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.175746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.175772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.175955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.175984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.176163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.176193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.176371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.176396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.176516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.176563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.176725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.176753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.176933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.176967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.177121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.177149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.177350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.177378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.177519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.177544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.177678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.177722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.177908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.177937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.178094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.178119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.178336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.178364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.178511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.178539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.178740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.178768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.178955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.178981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.179153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.179183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.179374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.179400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.179576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.179604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.179754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.179783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.179927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.179953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.180111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.180137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.180328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.180355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.180542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.180568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.180726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.180751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.180905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.180934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.181110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.181135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.181339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.181367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.181543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.181571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.181750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.181775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.181989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.182022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.182167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.182196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.182370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.182395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 00:25:09.640 [2024-07-15 16:41:49.182578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.640 [2024-07-15 16:41:49.182606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.640 qpair failed and we were unable to recover it. 
00:25:09.640 [2024-07-15 16:41:49.182748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.640 [2024-07-15 16:41:49.182778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.640 qpair failed and we were unable to recover it.
00:25:09.921 [... identical connect() failed (errno = 111) / sock connection error / qpair failed messages repeated verbatim through 2024-07-15 16:41:49.205 ...]
00:25:09.922 [2024-07-15 16:41:49.205889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.205915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.206084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.206116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.206295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.206323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.206479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.206504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.206711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.206739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 
00:25:09.922 [2024-07-15 16:41:49.206913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.206941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.207122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.207146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.207320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.207348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.207534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.207562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.207710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.207735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 
00:25:09.922 [2024-07-15 16:41:49.207873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.207918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.208128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.208154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.208364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.208389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.208564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.208592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.208774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.208801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 
00:25:09.922 [2024-07-15 16:41:49.209009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.209034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.209179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.209207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.209387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.209415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.209574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.209599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.209762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.209786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 
00:25:09.922 [2024-07-15 16:41:49.209973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.210001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.210176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.210202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.210357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.210384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.210527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.210555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.210740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.210765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 
00:25:09.922 [2024-07-15 16:41:49.210927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.210956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.211130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.211159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.211340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.211365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.211575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.211602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.211748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.211777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 
00:25:09.922 [2024-07-15 16:41:49.211953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.211979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.212137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.212164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.212377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.922 [2024-07-15 16:41:49.212404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.922 qpair failed and we were unable to recover it. 00:25:09.922 [2024-07-15 16:41:49.212583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.212607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.212745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.212772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 
00:25:09.923 [2024-07-15 16:41:49.212960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.212986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.213181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.213206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.213386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.213415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.213559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.213586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.213792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.213817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 
00:25:09.923 [2024-07-15 16:41:49.213999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.214028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.214201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.214233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.214411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.214436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.214643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.214671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.214886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.214912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 
00:25:09.923 [2024-07-15 16:41:49.215048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.215073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.215213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.215256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.215401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.215430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.215633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.215658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.215838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.215866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 
00:25:09.923 [2024-07-15 16:41:49.216047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.216075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.216256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.216281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.216464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.216492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.216668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.216695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.216858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.216895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 
00:25:09.923 [2024-07-15 16:41:49.217115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.217143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.217321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.217349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.217531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.217557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.217764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.217791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.217961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.217989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 
00:25:09.923 [2024-07-15 16:41:49.218167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.218192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.218373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.923 [2024-07-15 16:41:49.218401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.923 qpair failed and we were unable to recover it. 00:25:09.923 [2024-07-15 16:41:49.218609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.218634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.218773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.218798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.218953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.218982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 
00:25:09.924 [2024-07-15 16:41:49.219183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.219212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.219418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.219443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.219621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.219648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.219808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.219835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.220001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.220027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 
00:25:09.924 [2024-07-15 16:41:49.220166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.220191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.220384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.220411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.220596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.220621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.220801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.220829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.221005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.221033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 
00:25:09.924 [2024-07-15 16:41:49.221190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.221216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.221403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.221428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.221644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.221673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.221856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.221890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.222028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.222053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 
00:25:09.924 [2024-07-15 16:41:49.222241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.222266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.222472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.222501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.222682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.222711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.222922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.222951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.223132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.223158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 
00:25:09.924 [2024-07-15 16:41:49.223342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.223371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.223522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.223551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.223736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.223761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.223922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.223948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.224150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.224178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 
00:25:09.924 [2024-07-15 16:41:49.224364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.224389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.224574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.224599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.924 [2024-07-15 16:41:49.224779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.924 [2024-07-15 16:41:49.224807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.924 qpair failed and we were unable to recover it. 00:25:09.925 [2024-07-15 16:41:49.224987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.925 [2024-07-15 16:41:49.225013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.925 qpair failed and we were unable to recover it. 00:25:09.925 [2024-07-15 16:41:49.225266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.925 [2024-07-15 16:41:49.225294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.925 qpair failed and we were unable to recover it. 
00:25:09.925 [2024-07-15 16:41:49.225502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.925 [2024-07-15 16:41:49.225530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.925 qpair failed and we were unable to recover it. 00:25:09.925 [2024-07-15 16:41:49.225699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.925 [2024-07-15 16:41:49.225724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.925 qpair failed and we were unable to recover it. 00:25:09.925 [2024-07-15 16:41:49.225925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.925 [2024-07-15 16:41:49.225954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.925 qpair failed and we were unable to recover it. 00:25:09.925 [2024-07-15 16:41:49.226126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.925 [2024-07-15 16:41:49.226154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.925 qpair failed and we were unable to recover it. 00:25:09.925 [2024-07-15 16:41:49.226301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.226325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 
00:25:09.926 [2024-07-15 16:41:49.226465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.226490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.226706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.226734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.226893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.226919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.227056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.227098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.227246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.227274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 
00:25:09.926 [2024-07-15 16:41:49.227490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.227516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.227697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.227725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.227867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.227903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.228086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.228112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.228250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.228276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 
00:25:09.926 [2024-07-15 16:41:49.228463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.228489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.228692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.228717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.228963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.228991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.229141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.229170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.229336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.229362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 
00:25:09.926 [2024-07-15 16:41:49.229538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.229566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.229738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.229767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.229930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.229956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.230094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.230120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.230279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.230305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 
00:25:09.926 [2024-07-15 16:41:49.230498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.230523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.230678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.230711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.230918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.230947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.231153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.231178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 00:25:09.926 [2024-07-15 16:41:49.231322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.926 [2024-07-15 16:41:49.231350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.926 qpair failed and we were unable to recover it. 
00:25:09.927 [2024-07-15 16:41:49.231642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.231693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.231846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.231870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.232039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.232082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.232285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.232311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.232502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.232527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 
00:25:09.927 [2024-07-15 16:41:49.232711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.232739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.232888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.232916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.233099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.233124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.233279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.233304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.233460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.233487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 
00:25:09.927 [2024-07-15 16:41:49.233673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.233700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.233873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.233908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.234083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.234111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.234314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.234338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.234490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.234517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 
00:25:09.927 [2024-07-15 16:41:49.234696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.234723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.234894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.234919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.235103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.235131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.235272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.235300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.235502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.235527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 
00:25:09.927 [2024-07-15 16:41:49.235708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.235735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.235886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.235915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.236121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.236147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.236361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.236388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.236557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.236583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 
00:25:09.927 [2024-07-15 16:41:49.236837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.236862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.237051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.237078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.237282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.237309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.237469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.237493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.237672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.237700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 
00:25:09.927 [2024-07-15 16:41:49.237871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.237904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.927 qpair failed and we were unable to recover it. 00:25:09.927 [2024-07-15 16:41:49.238091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.927 [2024-07-15 16:41:49.238117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.238255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.238279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.238417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.238444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.238603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.238628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 
00:25:09.928 [2024-07-15 16:41:49.238777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.238804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.238992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.239021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.239209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.239234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.239423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.239451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.239635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.239660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 
00:25:09.928 [2024-07-15 16:41:49.239828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.239852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.240051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.240079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.240299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.240323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.240509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.240534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.240688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.240715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 
00:25:09.928 [2024-07-15 16:41:49.240899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.240928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.241106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.241130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.241374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.241402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.241604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.241632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.241789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.241815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 
00:25:09.928 [2024-07-15 16:41:49.241956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.241982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.242171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.242199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.242405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.242430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.242561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.242586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 00:25:09.928 [2024-07-15 16:41:49.242721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.928 [2024-07-15 16:41:49.242745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.928 qpair failed and we were unable to recover it. 
00:25:09.928 [2024-07-15 16:41:49.242943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.928 [2024-07-15 16:41:49.242969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.928 qpair failed and we were unable to recover it.
00:25:09.928 [2024-07-15 16:41:49.243142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.928 [2024-07-15 16:41:49.243171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.928 qpair failed and we were unable to recover it.
00:25:09.928 [2024-07-15 16:41:49.243361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.928 [2024-07-15 16:41:49.243389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.928 qpair failed and we were unable to recover it.
00:25:09.928 [2024-07-15 16:41:49.243564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.928 [2024-07-15 16:41:49.243589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.243740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.243767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.243940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.243966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.244118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.244143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.244292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.244319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.244475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.244502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.244688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.244713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.244859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.244893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.245072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.245099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.245285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.245309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.245452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.245481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.245664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.245691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.245846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.245870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.246051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.246079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.246283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.246310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.246501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.246527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.246705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.246733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.246908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.246938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.247140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.247168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.247322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.247349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.247524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.247553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.247732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.247757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.247952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.247980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.248158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.248185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.248370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.248394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.248599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.248626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.248802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.248830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.249028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.249054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.249233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.249261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.249440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.249467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.929 qpair failed and we were unable to recover it.
00:25:09.929 [2024-07-15 16:41:49.249643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.929 [2024-07-15 16:41:49.249667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.249808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.249833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.249983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.250009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.250164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.250188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.250341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.250369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.250546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.250574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.250728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.250752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.250968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.250996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.251201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.251225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.251380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.251405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.251539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.251581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.251754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.251782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.251943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.251968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.252156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.252181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.252394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.252422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.252611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.252636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.252820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.252848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.253060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.253085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.253217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.253241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.253421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.253449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.253592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.253618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.253797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.253821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.253955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.253998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.254176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.254203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.254386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.254411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.254618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.254646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.254800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.254827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.255010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.255036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.255210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.930 [2024-07-15 16:41:49.255243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.930 qpair failed and we were unable to recover it.
00:25:09.930 [2024-07-15 16:41:49.255446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.255474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.255657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.255681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.255816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.255840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.255988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.256015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.256178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.256203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.256384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.256411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.256556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.256584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.256742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.256767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.256935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.256961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.257120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.257161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.257371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.257396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.257605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.257633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.257812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.257841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.258030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.258057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.258208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.258235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.258408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.258436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.258616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.258641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.258817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.258846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.259036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.259062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.259248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.259273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.259485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.259512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.259675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.259702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.259886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.259912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.260119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.260147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.260307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.260332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.931 [2024-07-15 16:41:49.260493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.931 [2024-07-15 16:41:49.260517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.931 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.260682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.932 [2024-07-15 16:41:49.260706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.932 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.260884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.932 [2024-07-15 16:41:49.260912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.932 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.261068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.932 [2024-07-15 16:41:49.261093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.932 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.261229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.932 [2024-07-15 16:41:49.261271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.932 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.261450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.932 [2024-07-15 16:41:49.261478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.932 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.261658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.932 [2024-07-15 16:41:49.261683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.932 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.261859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.932 [2024-07-15 16:41:49.261894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.932 qpair failed and we were unable to recover it.
00:25:09.932 [2024-07-15 16:41:49.262045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.262072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.262226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.262250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.262411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.262437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.262573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.262598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.262734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.262759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 
00:25:09.932 [2024-07-15 16:41:49.262936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.262965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.263136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.263170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.263345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.263369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.263549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.263576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.263751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.263779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 
00:25:09.932 [2024-07-15 16:41:49.263924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.263950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.264163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.264190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.264386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.264413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.264564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.264589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.264791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.264819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 
00:25:09.932 [2024-07-15 16:41:49.264992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.265021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.932 qpair failed and we were unable to recover it. 00:25:09.932 [2024-07-15 16:41:49.265231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.932 [2024-07-15 16:41:49.265256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.265409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.265436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.265647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.265672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.265832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.265856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 
00:25:09.933 [2024-07-15 16:41:49.266025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.266055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.266240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.266266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.266425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.266450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.266609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.266634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.266903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.266932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 
00:25:09.933 [2024-07-15 16:41:49.267145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.267170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.267351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.267381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.267560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.267588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.267766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.267791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.267936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.267962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 
00:25:09.933 [2024-07-15 16:41:49.268119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.268161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.268316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.268343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.268559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.268588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.268733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.268762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.268936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.268963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 
00:25:09.933 [2024-07-15 16:41:49.269153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.269181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.269384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.269412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.269575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.269601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.269769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.269796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.269979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.270006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 
00:25:09.933 [2024-07-15 16:41:49.270137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.270163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.270409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.270437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.270588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.270617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.270780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.270805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 00:25:09.933 [2024-07-15 16:41:49.270957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.933 [2024-07-15 16:41:49.270983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.933 qpair failed and we were unable to recover it. 
00:25:09.933 [2024-07-15 16:41:49.271167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.271196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.271377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.271406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.271583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.271612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.271784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.271812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.271969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.271994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 
00:25:09.934 [2024-07-15 16:41:49.272252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.272280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.272453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.272481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.272661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.272686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.272856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.272892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.273091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.273120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 
00:25:09.934 [2024-07-15 16:41:49.273294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.273319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.273498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.273527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.273711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.273740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.273948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.273974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.274169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.274197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 
00:25:09.934 [2024-07-15 16:41:49.274374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.274402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.274643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.274668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.274810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.274836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.274998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.275041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.275223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.275248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 
00:25:09.934 [2024-07-15 16:41:49.275453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.275481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.275693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.275718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.275886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.275912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.276073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.276098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.276279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.276309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 
00:25:09.934 [2024-07-15 16:41:49.276487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.276513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.276693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.276720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.276968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.934 [2024-07-15 16:41:49.276997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.934 qpair failed and we were unable to recover it. 00:25:09.934 [2024-07-15 16:41:49.277184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.277210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.277418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.277446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 
00:25:09.935 [2024-07-15 16:41:49.277587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.277616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.277804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.277829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.277964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.277991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.278172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.278201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.278384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.278410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 
00:25:09.935 [2024-07-15 16:41:49.278577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.278602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.278766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.278791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.278950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.278976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.279185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.279213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 00:25:09.935 [2024-07-15 16:41:49.279353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.935 [2024-07-15 16:41:49.279380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.935 qpair failed and we were unable to recover it. 
00:25:09.935 [2024-07-15 16:41:49.279541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.935 [2024-07-15 16:41:49.279567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.935 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 at posix.c:1038, sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 at nvme_tcp.c:2383, then "qpair failed and we were unable to recover it." — repeats continuously through timestamps 16:41:49.279744 to 16:41:49.303112 (log time 00:25:09.935–00:25:09.939) ...]
00:25:09.939 [2024-07-15 16:41:49.303265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.303293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.303467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.303495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.303671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.303696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.303835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.303884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.304060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.304088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 
00:25:09.939 [2024-07-15 16:41:49.304268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.304294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.304474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.304502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.304683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.304712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.304867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.304905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 00:25:09.939 [2024-07-15 16:41:49.305045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.939 [2024-07-15 16:41:49.305088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.939 qpair failed and we were unable to recover it. 
00:25:09.939 [2024-07-15 16:41:49.305236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.305264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.305411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.305436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.305637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.305664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.305829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.305854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.306049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.306075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 
00:25:09.940 [2024-07-15 16:41:49.306260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.306289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.306464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.306492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.306651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.306675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.306844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.306895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.307102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.307130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 
00:25:09.940 [2024-07-15 16:41:49.307312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.307338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.307494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.307521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.307681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.307706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.307832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.307856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.308078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.308106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 
00:25:09.940 [2024-07-15 16:41:49.308295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.308320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.308457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.308482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.308609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.308634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.308819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.308847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.309044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.309070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 
00:25:09.940 [2024-07-15 16:41:49.309246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.309273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.309444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.309473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.309650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.309675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.309841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.309870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.310040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.310065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 
00:25:09.940 [2024-07-15 16:41:49.310225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.310251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.310424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.310451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.310628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.310656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.940 [2024-07-15 16:41:49.310811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.940 [2024-07-15 16:41:49.310837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.940 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.311009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.311035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 
00:25:09.941 [2024-07-15 16:41:49.311156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.311181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.311364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.311389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.311584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.311610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.311757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.311785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.311991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.312017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 
00:25:09.941 [2024-07-15 16:41:49.312198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.312225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.312423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.312451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.312639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.312664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.312871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.312905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.313083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.313111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 
00:25:09.941 [2024-07-15 16:41:49.313284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.313309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.313458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.313490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.313695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.313722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.313972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.313998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.314124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.314164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 
00:25:09.941 [2024-07-15 16:41:49.314339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.314367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.314523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.314548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.314730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.314758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.314947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.314973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.315135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.315160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 
00:25:09.941 [2024-07-15 16:41:49.315313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.315341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.315518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.315546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.315731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.315756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.315896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.315921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.316125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.316153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 
00:25:09.941 [2024-07-15 16:41:49.316359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.316385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.316591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.316619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.941 qpair failed and we were unable to recover it. 00:25:09.941 [2024-07-15 16:41:49.316791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.941 [2024-07-15 16:41:49.316818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 00:25:09.942 [2024-07-15 16:41:49.316977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.942 [2024-07-15 16:41:49.317003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 00:25:09.942 [2024-07-15 16:41:49.317141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.942 [2024-07-15 16:41:49.317165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 
00:25:09.942 [2024-07-15 16:41:49.317368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.942 [2024-07-15 16:41:49.317396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 00:25:09.942 [2024-07-15 16:41:49.317578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.942 [2024-07-15 16:41:49.317603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 00:25:09.942 [2024-07-15 16:41:49.317756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.942 [2024-07-15 16:41:49.317784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 00:25:09.942 [2024-07-15 16:41:49.317934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.942 [2024-07-15 16:41:49.317966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 00:25:09.942 [2024-07-15 16:41:49.318149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.942 [2024-07-15 16:41:49.318175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.942 qpair failed and we were unable to recover it. 
00:25:09.942 [2024-07-15 16:41:49.318360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.318389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.318536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.318563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.318735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.318759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.318889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.318915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.319069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.319094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.319233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.319257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.319403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.319431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.319610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.319638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.319824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.319849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.320005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.320033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.320201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.320228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.320382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.320408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.320578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.320604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.320783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.320810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.321001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.321027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.321202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.321230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.321431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.321459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.321641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.321666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.321845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.321873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.322031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.942 [2024-07-15 16:41:49.322060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.942 qpair failed and we were unable to recover it.
00:25:09.942 [2024-07-15 16:41:49.322241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.322266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.322473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.322501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.322678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.322707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.322896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.322922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.323074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.323103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.323283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.323311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.323500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.323525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.323672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.323700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.323904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.323933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.324092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.324117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.324300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.324328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.324503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.324530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.324708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.324732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.324909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.324937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.325079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.325107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.325306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.325330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.325477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.325505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.325713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.325741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.325925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.325954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.326093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.326118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.326277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.326301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.326435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.326460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.326598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.326623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.326833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.326861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.327049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.327075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.327270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.327298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.327442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.327471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.327682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.327707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.943 [2024-07-15 16:41:49.327888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.943 [2024-07-15 16:41:49.327917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.943 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.328062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.328090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.328299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.328324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.328505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.328533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.328704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.328733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.328904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.328930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.329099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.329127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.329307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.329336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.329518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.329543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.329721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.329749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.329897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.329926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.330112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.330137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.330311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.330339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.330507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.330535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.330710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.330736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.330866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.330915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.331095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.331123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.331281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.331308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.331485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.331514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.331716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.331744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.331930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.331956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.332120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.332145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.332323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.944 [2024-07-15 16:41:49.332351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.944 qpair failed and we were unable to recover it.
00:25:09.944 [2024-07-15 16:41:49.332507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.332532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.332692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.332737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.332887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.332917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.333083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.333108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.333265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.333291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.333492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.333520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.333701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.333728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.333893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.333923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.334087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.334113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.334269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.334295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.334502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.334530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.334725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.334753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.334963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.334989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.335167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.335196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.335363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.335391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.335537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.335563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.335738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.335767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.335908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.335936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.336115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.336140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.336300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.336343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.336546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.336575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.336762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.336788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.336956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.336982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.337119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.337144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.337277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.337303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.337470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.337497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.337698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.337726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.337886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.337912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.945 [2024-07-15 16:41:49.338092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.945 [2024-07-15 16:41:49.338122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.945 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.338325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.338353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.338530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.338555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.338685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.338728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.338900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.338929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.339079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.339104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.339272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.339297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.339507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.339535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.339685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.339710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.339889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.339918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.340095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.340123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.340308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.340332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.340462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.340489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.340624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.340649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.340835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.340860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.341052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.946 [2024-07-15 16:41:49.341081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.946 qpair failed and we were unable to recover it.
00:25:09.946 [2024-07-15 16:41:49.341231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.341258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.341472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.341497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.341676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.341703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.341885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.341919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.342080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.342104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 
00:25:09.946 [2024-07-15 16:41:49.342304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.342331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.342542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.342567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.342726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.342750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.342956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.342985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.343195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.343220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 
00:25:09.946 [2024-07-15 16:41:49.343383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.343407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.343589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.343618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.343760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.946 [2024-07-15 16:41:49.343788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.946 qpair failed and we were unable to recover it. 00:25:09.946 [2024-07-15 16:41:49.343940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.343965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.344167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.344195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 
00:25:09.947 [2024-07-15 16:41:49.344343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.344372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.344580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.344605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.344793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.344822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.344988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.345014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.345151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.345175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 
00:25:09.947 [2024-07-15 16:41:49.345340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.345365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.345540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.345567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.345745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.345769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.345943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.345972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.346158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.346183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 
00:25:09.947 [2024-07-15 16:41:49.346312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.346339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.346515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.346542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.346723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.346751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.346965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.346991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.347176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.347204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 
00:25:09.947 [2024-07-15 16:41:49.347388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.347416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.347561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.347586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.347719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.347743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.347961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.347990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.348147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.348171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 
00:25:09.947 [2024-07-15 16:41:49.348306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.348347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.348495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.348523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.348706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.348731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.348914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.348942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.947 [2024-07-15 16:41:49.349115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.349142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 
00:25:09.947 [2024-07-15 16:41:49.349325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.947 [2024-07-15 16:41:49.349350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.947 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.349568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.349596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.349773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.349800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.349986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.350029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.350185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.350212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 
00:25:09.948 [2024-07-15 16:41:49.350388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.350416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.350591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.350616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.350822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.350851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.351043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.351069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.351226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.351251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 
00:25:09.948 [2024-07-15 16:41:49.351427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.351454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.351628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.351656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.351860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.351891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.352027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.352052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.352209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.352233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 
00:25:09.948 [2024-07-15 16:41:49.352406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.352432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.352637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.352665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.352850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.352885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.353064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.353089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.353218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.353242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 
00:25:09.948 [2024-07-15 16:41:49.353377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.353403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.353586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.353612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.353773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.353801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.353975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.354004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.354191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.354215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 
00:25:09.948 [2024-07-15 16:41:49.354396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.354425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.354635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.354660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.354818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.948 [2024-07-15 16:41:49.354843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.948 qpair failed and we were unable to recover it. 00:25:09.948 [2024-07-15 16:41:49.355034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.355062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.355218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.355246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 
00:25:09.949 [2024-07-15 16:41:49.355403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.355428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.355563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.355589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.355740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.355765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.355924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.355950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.356125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.356153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 
00:25:09.949 [2024-07-15 16:41:49.356303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.356332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.356507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.356532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.356684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.356712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.356847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.356882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 00:25:09.949 [2024-07-15 16:41:49.357064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.949 [2024-07-15 16:41:49.357090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.949 qpair failed and we were unable to recover it. 
00:25:09.953 [2024-07-15 16:41:49.379640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.953 [2024-07-15 16:41:49.379665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.953 qpair failed and we were unable to recover it. 00:25:09.953 [2024-07-15 16:41:49.379800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.379843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.380063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.380091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.380242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.380267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.380475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.380503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 
00:25:09.954 [2024-07-15 16:41:49.380703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.380731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.380970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.380996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.381153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.381194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.381372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.381399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.381576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.381601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 
00:25:09.954 [2024-07-15 16:41:49.381809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.381837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.382024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.382050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.382208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.382234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.382414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.382448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.382630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.382656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 
00:25:09.954 [2024-07-15 16:41:49.382818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.382843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.383011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.383037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.383228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.383253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.383448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.383473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.383662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.383691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 
00:25:09.954 [2024-07-15 16:41:49.383892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.383921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.384102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.384127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.384301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.384329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.384477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.384505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.384682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.384707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 
00:25:09.954 [2024-07-15 16:41:49.384913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.384951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.954 [2024-07-15 16:41:49.385127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.954 [2024-07-15 16:41:49.385155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.954 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.385305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.385330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.385489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.385533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.385688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.385716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 
00:25:09.955 [2024-07-15 16:41:49.385882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.385907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.386075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.386099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.386260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.386286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.386440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.386465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.386640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.386667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 
00:25:09.955 [2024-07-15 16:41:49.386831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.386859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.387021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.387047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.387207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.387250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.387421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.387448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.387631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.387656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 
00:25:09.955 [2024-07-15 16:41:49.387863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.387900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.388117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.388142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.388300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.388325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.388533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.388560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.388740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.388765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 
00:25:09.955 [2024-07-15 16:41:49.388925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.388952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.389163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.389191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.389391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.389419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.389611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.389636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.389846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.389873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 
00:25:09.955 [2024-07-15 16:41:49.390078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.390106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.390315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.390340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.390520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.390547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.390725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.390750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.390912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.390938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 
00:25:09.955 [2024-07-15 16:41:49.391147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.391174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.391354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.955 [2024-07-15 16:41:49.391381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.955 qpair failed and we were unable to recover it. 00:25:09.955 [2024-07-15 16:41:49.391559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.391584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.391759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.391787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.391963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.391992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 
00:25:09.956 [2024-07-15 16:41:49.392170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.392194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.392374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.392402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.392719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.392769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.392959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.392984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.393170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.393198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 
00:25:09.956 [2024-07-15 16:41:49.393372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.393400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.393571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.393596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.393730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.393772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.393949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.393976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.394154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.394179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 
00:25:09.956 [2024-07-15 16:41:49.394396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.394424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.394610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.394638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.394845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.394870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.395065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.395093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.395264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.395291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 
00:25:09.956 [2024-07-15 16:41:49.395446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.395471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.395602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.395644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.395790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.395816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.396017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.396043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 00:25:09.956 [2024-07-15 16:41:49.396245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.956 [2024-07-15 16:41:49.396274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.956 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." entries for tqpair=0x7f33f8000b90 (addr=10.0.0.2, port=4420) repeated through 16:41:49.418730 ...]
00:25:09.960 [2024-07-15 16:41:49.418915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.960 [2024-07-15 16:41:49.418944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.960 qpair failed and we were unable to recover it. 00:25:09.960 [2024-07-15 16:41:49.419099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.960 [2024-07-15 16:41:49.419126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.960 qpair failed and we were unable to recover it. 00:25:09.960 [2024-07-15 16:41:49.419314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.960 [2024-07-15 16:41:49.419339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.960 qpair failed and we were unable to recover it. 00:25:09.960 [2024-07-15 16:41:49.419505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.960 [2024-07-15 16:41:49.419530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.960 qpair failed and we were unable to recover it. 00:25:09.960 [2024-07-15 16:41:49.419714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.419739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 
00:25:09.961 [2024-07-15 16:41:49.419900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.419925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.420052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.420077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.420241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.420267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.420427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.420451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.420603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.420630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 
00:25:09.961 [2024-07-15 16:41:49.420813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.420842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.421007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.421033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.421214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.421241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.421415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.421443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.421604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.421629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 
00:25:09.961 [2024-07-15 16:41:49.421799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.421827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.422012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.422038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.422195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.422220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.422390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.422415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.422553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.422578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 
00:25:09.961 [2024-07-15 16:41:49.422744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.422768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.422912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.422940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.423091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.423119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.423268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.423293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.423435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.423476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 
00:25:09.961 [2024-07-15 16:41:49.423651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.423679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.423857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.423888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.424040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.424069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.424216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.424244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.424449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.424473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 
00:25:09.961 [2024-07-15 16:41:49.424638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.424665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.424839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.424867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.425069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.425095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.425302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.425330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.961 qpair failed and we were unable to recover it. 00:25:09.961 [2024-07-15 16:41:49.425501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.961 [2024-07-15 16:41:49.425529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 
00:25:09.962 [2024-07-15 16:41:49.425732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.425756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.425921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.425946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.426100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.426129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.426336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.426362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.426535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.426563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 
00:25:09.962 [2024-07-15 16:41:49.426772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.426799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.426955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.426981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.427113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.427139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.427300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.427325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.427519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.427544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 
00:25:09.962 [2024-07-15 16:41:49.427695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.427723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.427881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.427910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.428097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.428123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.428300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.428329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.428512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.428539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 
00:25:09.962 [2024-07-15 16:41:49.428743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.428771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.428950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.428978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.429116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.429144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.429309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.429334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.429489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.429513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 
00:25:09.962 [2024-07-15 16:41:49.429684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.429712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.429868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.429900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.430063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.430107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.430241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.430268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.430477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.430502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 
00:25:09.962 [2024-07-15 16:41:49.430717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.430745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.430922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.430951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.431110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.431136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.431289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.431313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.962 [2024-07-15 16:41:49.431474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.431502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 
00:25:09.962 [2024-07-15 16:41:49.431664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.962 [2024-07-15 16:41:49.431689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.962 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.431854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.431884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.432046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.432071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.432228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.432253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.432458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.432486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 
00:25:09.963 [2024-07-15 16:41:49.432636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.432664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.432820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.432847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.433046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.433074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.433257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.433285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 00:25:09.963 [2024-07-15 16:41:49.433459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.433484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it. 
00:25:09.963 [2024-07-15 16:41:49.433624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.963 [2024-07-15 16:41:49.433650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.963 qpair failed and we were unable to recover it.
00:25:09.967 [2024-07-15 16:41:49.457047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.457075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.457273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.457298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.457505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.457534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.457712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.457741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.457895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.457921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 
00:25:09.967 [2024-07-15 16:41:49.458104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.458133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.458310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.458337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.458492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.458516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.458679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.458705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.458861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.458898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 
00:25:09.967 [2024-07-15 16:41:49.459082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.459107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.459292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.459319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.459503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.459531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.459690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.459716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.459918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.459947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 
00:25:09.967 [2024-07-15 16:41:49.460101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.460128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.460308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.460333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.460469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.460494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.460620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.460644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.460773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.460797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 
00:25:09.967 [2024-07-15 16:41:49.460952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.461001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.461155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.461183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.461359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.461384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.461564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.461592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.461735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.461763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 
00:25:09.967 [2024-07-15 16:41:49.461916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.461941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.462078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.462120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.462268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.462296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.462504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.462529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.462688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.462715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 
00:25:09.967 [2024-07-15 16:41:49.462891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.462919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.463103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.463128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.463297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.967 [2024-07-15 16:41:49.463325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.967 qpair failed and we were unable to recover it. 00:25:09.967 [2024-07-15 16:41:49.463466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.463493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.463677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.463702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 
00:25:09.968 [2024-07-15 16:41:49.463891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.463920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.464090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.464118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.464324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.464349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.464552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.464581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.464759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.464786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 
00:25:09.968 [2024-07-15 16:41:49.464948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.464973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.465134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.465161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.465378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.465404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.465587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.465613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.465829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.465857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 
00:25:09.968 [2024-07-15 16:41:49.466037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.466065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.466267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.466292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.466440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.466468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.466644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.466673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.466830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.466856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 
00:25:09.968 [2024-07-15 16:41:49.466999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.467041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.467251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.467280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.467491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.467517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.467697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.467725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.467883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.467912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 
00:25:09.968 [2024-07-15 16:41:49.468116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.468141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.468319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.468346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.468520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.468548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.468704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.468730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.468892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.468934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 
00:25:09.968 [2024-07-15 16:41:49.469140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.469171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.469382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.968 [2024-07-15 16:41:49.469408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.968 qpair failed and we were unable to recover it. 00:25:09.968 [2024-07-15 16:41:49.469592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.469620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.469791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.469819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.469974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.470000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 
00:25:09.969 [2024-07-15 16:41:49.470186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.470210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.470402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.470427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.470559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.470584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.470719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.470761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.470935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.470963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 
00:25:09.969 [2024-07-15 16:41:49.471116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.471140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.471303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.471328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.471507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.471534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.471726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.471751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.471902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.471929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 
00:25:09.969 [2024-07-15 16:41:49.472130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.472158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.472363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.472388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.472565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.472593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.472775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.472801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.472977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.473002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 
00:25:09.969 [2024-07-15 16:41:49.473140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.473165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.473323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.473347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.473506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.473531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.473737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.473765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 00:25:09.969 [2024-07-15 16:41:49.473964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.969 [2024-07-15 16:41:49.473992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:09.969 qpair failed and we were unable to recover it. 
00:25:09.969 [2024-07-15 16:41:49.474166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.474192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.474352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.474393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.474573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.474601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.474819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.474844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.475015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.475040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.475217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.475246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.475403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.475427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.475584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.475624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.475774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.475801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.475980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.476006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.476138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.476163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.476297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.476322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.969 qpair failed and we were unable to recover it.
00:25:09.969 [2024-07-15 16:41:49.476478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.969 [2024-07-15 16:41:49.476503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.476686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.476714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.476853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.476889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.477043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.477073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.477245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.477273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.477413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.477441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.477647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.477672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.477811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.477838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.478002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.478028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.478214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.478239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.478424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.478452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.478621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.478649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.478824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.478849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.479070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.479099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.479314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.479340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.479470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.479495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.479702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.479730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.479946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.479975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.480132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.480158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.480342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.480370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.480544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.480573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.480722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.480747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.480901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.480927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.481111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.481137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.481340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.481365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.481522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.481547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.481728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.481756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.481975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.482001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.482158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.482186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.482334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.482362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.482544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.482569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.482750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.482778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.482914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.482943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.483131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.970 [2024-07-15 16:41:49.483157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.970 qpair failed and we were unable to recover it.
00:25:09.970 [2024-07-15 16:41:49.483338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.483366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.483531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.483558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.483732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.483757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.483935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.483964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.484136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.484164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.484345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.484370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.484550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.484578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.484778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.484806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.485014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.485039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.485215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.485247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.485431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.485459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.485641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.485667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.485802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.485828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.485988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.486014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.486147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.486173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.486307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.486332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.486461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.486486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.486659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.486685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.486904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.486933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.487078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.487106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.487255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.487280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.487410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.487451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.487661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.487686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.487829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.487855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.488026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.488071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.488298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.488325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.488497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.488524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.488765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.488818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.489004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.489034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.489195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.489221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.489386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.489411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.489565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.489591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.489776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.489802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.489962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.489992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.490171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.490201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.490381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.490407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.490634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.971 [2024-07-15 16:41:49.490688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.971 qpair failed and we were unable to recover it.
00:25:09.971 [2024-07-15 16:41:49.490860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.490896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.491105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.491130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.491313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.491342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.491543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.491572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.491778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.491804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.492031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.492058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.492200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.492226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.492391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.492416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.492550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.492575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.492738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.492764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.492904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.492930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.493117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.972 [2024-07-15 16:41:49.493145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:09.972 qpair failed and we were unable to recover it.
00:25:09.972 [2024-07-15 16:41:49.493351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.493380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.493600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.493626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.493801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.493830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.494043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.494069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.494208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.494233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 
00:25:09.972 [2024-07-15 16:41:49.494369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.494395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.494597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.494627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.494834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.494860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.495034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.495064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.495244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.495272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 
00:25:09.972 [2024-07-15 16:41:49.495468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.495494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.495683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.495713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 16:41:49.495900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.972 [2024-07-15 16:41:49.495931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:09.972 qpair failed and we were unable to recover it. 00:25:10.252 [2024-07-15 16:41:49.496113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.252 [2024-07-15 16:41:49.496140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.252 qpair failed and we were unable to recover it. 00:25:10.252 [2024-07-15 16:41:49.496348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.252 [2024-07-15 16:41:49.496379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.252 qpair failed and we were unable to recover it. 
00:25:10.252 [2024-07-15 16:41:49.496565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.252 [2024-07-15 16:41:49.496591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.252 qpair failed and we were unable to recover it. 00:25:10.252 [2024-07-15 16:41:49.496749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.252 [2024-07-15 16:41:49.496775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.252 qpair failed and we were unable to recover it. 00:25:10.252 [2024-07-15 16:41:49.496933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.252 [2024-07-15 16:41:49.496960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.252 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.497136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.497164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.497322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.497348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.497482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.497525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.497727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.497756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.497968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.497995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.498160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.498189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.498367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.498397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.498553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.498578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.498773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.498802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.498955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.498989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.499145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.499171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.499370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.499399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.499571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.499600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.499753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.499778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.499942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.499969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.500150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.500178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.500365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.500391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.500553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.500578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.500764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.500792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.500985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.501011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.501161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.501190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.501365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.501394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.501576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.501601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.501785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.501813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.501989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.502018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.502204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.502230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.502378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.502407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.502609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.502636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.502794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.502819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.503008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.503037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.503215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.503245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.503426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.503451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.503589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.503614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.503781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.503806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.503971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.503997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.504137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.504163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.504375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.504404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.504558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.504584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.504769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.504798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.504973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.505001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.505183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.505209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.505394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.505423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.505603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.505632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.505810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.505836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.506056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.506084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.506260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.506289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.506504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.506530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.506681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.506711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.506856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.506890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.507041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.507070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.507276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.507305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.507484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.507512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.507725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.507751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.507905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.507934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.508091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.508119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.508299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.508326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.508545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.508608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.508788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.508817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.509005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.509032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.509181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.509209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.509409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.509437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.253 [2024-07-15 16:41:49.509620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.509647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 
00:25:10.253 [2024-07-15 16:41:49.509829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.253 [2024-07-15 16:41:49.509859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.253 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.510061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.510087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.510246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.510271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.510472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.510500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.510676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.510703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 
00:25:10.254 [2024-07-15 16:41:49.510902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.510929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.511072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.511097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.511285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.511311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.511479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.511504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 00:25:10.254 [2024-07-15 16:41:49.511682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.254 [2024-07-15 16:41:49.511711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.254 qpair failed and we were unable to recover it. 
00:25:10.255 [2024-07-15 16:41:49.533805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.533834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.534019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.534049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.534260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.534285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.534427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.534456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.534633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.534662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.534828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.534857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.535053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.535078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.535237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.535265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.535447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.535473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.535622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.535651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.535796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.535825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.536006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.536033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.536188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.536217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.536419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.536449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.536616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.536642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.536809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.536834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.537027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.537053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.537211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.537237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.537372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.537397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.537562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.537588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.537758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.537785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.537996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.538026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.538202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.538230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.538414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.538439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.538645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.538672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.538847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.538888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.539072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.539099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.539306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.539340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.539497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.539528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.539712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.539737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.539939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.539968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.540142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.540171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.540325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.540351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.540559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.540588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.540730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.540758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.540938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.540965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.541140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.541169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.541340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.541369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.541553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.541578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.541722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.541748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.541928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.541958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.542149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.542175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.542357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.542386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.542560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.542587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.542742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.542768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.542912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.542956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.543133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.543161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.543347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.543372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.543578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.543606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.543780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.543808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.544010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.544037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.544203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.544230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.544410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.544439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.544644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.544670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.544814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.544840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.545023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.545052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.545268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.545294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.545451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.545480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.545654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.545682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.545886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.545912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 
00:25:10.256 [2024-07-15 16:41:49.546101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.546130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.546325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.546353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.546533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.546558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.546768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.256 [2024-07-15 16:41:49.546796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.256 qpair failed and we were unable to recover it. 00:25:10.256 [2024-07-15 16:41:49.546935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.546964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.547150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.547176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.547361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.547387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.547575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.547609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.547819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.547844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.547996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.548022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.548179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.548206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.548369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.548395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.548528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.548571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.548783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.548809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.548945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.548972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.549123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.549149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.549354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.549383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.549570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.549598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.549763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.549789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.549975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.550002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.550208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.550234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.550444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.550472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.550646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.550673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.550861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.550899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.551108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.551137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.551309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.551338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.551524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.551550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.551728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.551757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.551961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.551989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.552136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.552161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.552366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.552395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.552583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.552609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.552768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.552794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.552956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.552982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.553169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.553195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.553384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.553410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.553563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.553593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.553770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.553799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.553974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.554009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.554184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.554213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.554390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.554419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.554600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.554626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.554807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.554835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.554989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.555017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.555198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.555225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.555355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.555399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.555546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.555574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.555761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.555791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.555961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.555988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.556150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.556176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.556335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.556362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.556543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.556572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.556745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.556774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.556940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.556968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.557154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.557183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.557367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.557395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.557584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.557610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.557771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.557797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.558017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.558047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.558222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.558259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.558435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.558463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.558649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.558677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.558899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.558926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.559113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.559141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.559368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.559396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 
00:25:10.257 [2024-07-15 16:41:49.559619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.559645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.559797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.559826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.560027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.560055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.560235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.257 [2024-07-15 16:41:49.560261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.257 qpair failed and we were unable to recover it. 00:25:10.257 [2024-07-15 16:41:49.560479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.560508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.560708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.560737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.560953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.560980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.561142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.561171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.561342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.561370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.561559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.561585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.561724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.561750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.561886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.561911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.562097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.562122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.562331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.562359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.562531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.562560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.562735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.562765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.562912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.562959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.563093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.563119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.563326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.563351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.563528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.563558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.563739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.563767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.563952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.563979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.564161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.564196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.564375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.564404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.564595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.564621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.564756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.564782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.564959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.564988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.565149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.565175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.565350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.565376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.565534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.565560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.565718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.565744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.565923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.565952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.566130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.566159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.566341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.566367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.566506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.566533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.566724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.566753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.566941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.566966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.567126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.567156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.567337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.567366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.567552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.567578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.567750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.567777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.567973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.568003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.568188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.568214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.568399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.568427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 00:25:10.258 [2024-07-15 16:41:49.568605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.258 [2024-07-15 16:41:49.568643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.258 qpair failed and we were unable to recover it. 
00:25:10.258 [2024-07-15 16:41:49.568828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.568854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.569031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.569058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.569269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.569297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.569475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.569500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.569713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.569744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.569936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.569965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.570120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.570146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.570310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.570354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.570558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.570586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.570788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.570815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.571003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.571030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.571219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.571247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.571407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.571433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.571633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.571669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.571850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.571894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.572075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.572101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.572289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.572319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.258 qpair failed and we were unable to recover it.
00:25:10.258 [2024-07-15 16:41:49.572493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.258 [2024-07-15 16:41:49.572525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.572683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.572709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.572888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.572924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.573076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.573105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.573263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.573288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.573476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.573504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.573708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.573737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.573932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.573958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.574150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.574179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.574369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.574398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.574611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.574638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.574795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.574824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.575024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.575053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.575210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.575237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.575378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.575424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.575605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.575646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.575835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.575868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.576053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.576078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.576266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.576295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.576511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.576537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.576671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.576698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.576860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.576914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.577107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.577133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.577322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.577350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.577537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.577563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.577749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.577775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.577915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.577941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.578103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.578142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.578324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.578351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.578513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.578540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.578746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.578772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.578941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.578968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.579126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.579154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.579356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.579384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.579586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.579612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.579787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.579815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.579962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.579991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.580155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.580181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.580358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.580385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.580546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.580593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.580761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.580788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.580957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.580985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.581191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.581216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.581373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.581399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.581616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.581647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.581814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.581858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.582040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.582069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.582217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.582246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.582414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.582440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.582590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.582616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.582771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.582802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.582987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.583016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.583169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.583197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.583402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.583431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.583596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.583627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.583784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.583810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.584016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.584045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.584198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.584226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.584429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.584454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.584611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.584639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.584794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.584822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.585006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.585032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.585171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.585197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.585334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.585359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.585547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.259 [2024-07-15 16:41:49.585572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.259 qpair failed and we were unable to recover it.
00:25:10.259 [2024-07-15 16:41:49.585731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.585759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.585919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.585948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.586112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.586137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.586275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.586301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.586448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.586475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.586648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.586673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.586800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.586825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.586985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.587013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.587173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.587200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.587380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.587407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.587617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.587645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.587821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.587846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.587996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.588022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.588166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.588199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.588352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.588378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.588552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.588580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.588794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.588824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.588993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.589018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.589175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.589203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.589437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.589484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.589657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.589683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.589881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.589909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.590059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.590087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.590253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.590278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.590451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.590479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.590659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.590687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.590863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.590906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.591060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.591088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.591239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.591268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.591449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.591474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.591628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.591654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.591849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.591874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.592028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.260 [2024-07-15 16:41:49.592054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.260 qpair failed and we were unable to recover it.
00:25:10.260 [2024-07-15 16:41:49.592189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.592230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.592427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.592474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.592652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.592678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.592816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.592841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.593003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.593029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 
00:25:10.260 [2024-07-15 16:41:49.593169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.593195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.593388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.593417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.593568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.593601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.593801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.593828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.594032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.594058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 
00:25:10.260 [2024-07-15 16:41:49.594185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.594232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.594410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.594435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.594588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.594618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.594826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.594854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.595027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.595053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 
00:25:10.260 [2024-07-15 16:41:49.595214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.595243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.595442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.595487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.595645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.595671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.595814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.595857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.596019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.596063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 
00:25:10.260 [2024-07-15 16:41:49.596253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.596280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.596465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.596495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.596701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.596730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.596893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.596920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.597113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.597141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 
00:25:10.260 [2024-07-15 16:41:49.597347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.597393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.597562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.597589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.597718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.597771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.597956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.597985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.598134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.598159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 
00:25:10.260 [2024-07-15 16:41:49.598336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.260 [2024-07-15 16:41:49.598365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.260 qpair failed and we were unable to recover it. 00:25:10.260 [2024-07-15 16:41:49.598571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.598604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.598768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.598795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.598970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.598999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.599157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.599196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 
00:25:10.261 [2024-07-15 16:41:49.599357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.599383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.599560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.599589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.599773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.599802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.599982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.600008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.600137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.600183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 
00:25:10.261 [2024-07-15 16:41:49.600355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.600384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.600588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.600615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.600796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.600826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.600998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.601025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.601209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.601236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 
00:25:10.261 [2024-07-15 16:41:49.601448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.601477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.601704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.601733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.601927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.601954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.602135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.602164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.602362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.602390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 
00:25:10.261 [2024-07-15 16:41:49.602541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.602567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.602780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.602810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.602990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.603019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.603178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.603204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.603362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.603387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 
00:25:10.261 [2024-07-15 16:41:49.603620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.603646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.603784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.261 [2024-07-15 16:41:49.603811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.261 qpair failed and we were unable to recover it. 00:25:10.261 [2024-07-15 16:41:49.604639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.604673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.604906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.604953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.605145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.605171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.605361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.605390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.605536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.605565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.605729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.605754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.605920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.605947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.606134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.606163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.606383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.606410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.606598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.606627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.606827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.606856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.607048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.607073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.607209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.607235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.607419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.607448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.607636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.607662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.607813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.607842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.608038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.608067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.608221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.608250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.608421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.608450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.608652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.608681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.608870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.608911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.609063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.609091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.609267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.609295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.609490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.609516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.609719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.609748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.609933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.609963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.610140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.610165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.610329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.610355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.610521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.610547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.610730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.610758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.610926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.610952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.611092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.611118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.611263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.611291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.611475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.611504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.611690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.611718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.611883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.611909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.612086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.612114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.612301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.612330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.612515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.612541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.612748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.612782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.612944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.612973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.613134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.613160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.613285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.613311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.613495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.613525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.613742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.613767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.613980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.614008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.614159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.614188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.614401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.614438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.614612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.614641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.614849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.614890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.615054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.615081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.615302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.615331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.615518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.615546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.615711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.615737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.615887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.615915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.616080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.616109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.616309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.616335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.616470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.616496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.616658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.616685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.616852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.616883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.617068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.617101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.617257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.617286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.617463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.617488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 
00:25:10.262 [2024-07-15 16:41:49.617631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.262 [2024-07-15 16:41:49.617657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.262 qpair failed and we were unable to recover it. 00:25:10.262 [2024-07-15 16:41:49.617789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.617815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.617964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.617992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.618183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.618211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.618359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.618387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.618568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.618595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.618778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.618809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.618991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.619020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.619185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.619211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.619431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.619460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.619603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.619632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.619841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.619868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.620023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.620052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.620225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.620253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.620444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.620482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.620664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.620693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.620894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.620938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.621131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.621157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.621353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.621381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.621583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.621611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.621797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.621823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.621981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.622010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.622188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.622218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.622406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.622432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.622612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.622642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.622840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.622868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.623040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.623066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.623243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.623272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.623447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.623477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.623691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.623718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.623901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.623932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.624113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.624140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.624326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.624353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.624501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.624529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.624676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.624706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.624898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.624924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.625133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.625163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.625312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.625344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.625552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.625578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.625755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.625784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.625964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.625994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.626176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.626201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.626350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.626379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.626555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.626585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.626761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.626789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.626971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.626998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.627203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.627231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.627395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.627422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.627551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.627596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.627799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.627833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.628007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.628033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.628216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.628245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.628425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.628454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.628638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.628664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.628826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.628853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.629064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.629094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.629303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.629328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.629545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.629574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.629757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.629785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 00:25:10.263 [2024-07-15 16:41:49.629989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.263 [2024-07-15 16:41:49.630016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.263 qpair failed and we were unable to recover it. 
00:25:10.263 [2024-07-15 16:41:49.630169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.263 [2024-07-15 16:41:49.630205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.263 qpair failed and we were unable to recover it.
00:25:10.263 [2024-07-15 16:41:49.630388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.263 [2024-07-15 16:41:49.630417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.263 qpair failed and we were unable to recover it.
00:25:10.263 [2024-07-15 16:41:49.630603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.263 [2024-07-15 16:41:49.630629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.263 qpair failed and we were unable to recover it.
00:25:10.263 [2024-07-15 16:41:49.630813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.263 [2024-07-15 16:41:49.630841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.263 qpair failed and we were unable to recover it.
00:25:10.263 [2024-07-15 16:41:49.631048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.263 [2024-07-15 16:41:49.631076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.263 qpair failed and we were unable to recover it.
00:25:10.263 [2024-07-15 16:41:49.631264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.263 [2024-07-15 16:41:49.631290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.263 qpair failed and we were unable to recover it.
00:25:10.263 [2024-07-15 16:41:49.631467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.263 [2024-07-15 16:41:49.631496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.263 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.631644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.631673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.631859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.631900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.632061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.632087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.632282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.632308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.632468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.632494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.632706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.632735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.632912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.632942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.633131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.633157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.633340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.633370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.633554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.633582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.633765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.633795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.633962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.633989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.634181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.634210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.634403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.634429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.634607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.634635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.634835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.634863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.635035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.635062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.635235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.635264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.635437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.635467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.635651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.635686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.635867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.635902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.636105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.636130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.636300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.636326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.636534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.636563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.636748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.636777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.636958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.636984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.637165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.637193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.637370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.637398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.637557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.637583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.637794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.637823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.638002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.638032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.638218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.638244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.638435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.638464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.638612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.638641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.638806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.638833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.639067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.639097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.639300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.639329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.639519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.639545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.639736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.639764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.639948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.639977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.640160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.640186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.640332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.640361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.640574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.640599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.640738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.640764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.640972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.641002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.641151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.641180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.641362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.641388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.641561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.641589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.641767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.641796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.642002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.642029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.642168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.642199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.642378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.642403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.642589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.642615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.642770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.642799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.643010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.643040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.643213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.643239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.643452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.643481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.643670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.643700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.643915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.264 [2024-07-15 16:41:49.643941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.264 qpair failed and we were unable to recover it.
00:25:10.264 [2024-07-15 16:41:49.644105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.644131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.644345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.644373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.644534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.644560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.644771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.644799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.644958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.644988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.645207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.645233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.645411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.645440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.645614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.645643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.646436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.646469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.646691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.646721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.647006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.647033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.647172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.647198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.647375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.647403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.647569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.647598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.647782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.647808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.647997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.648027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.648234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.648263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.648426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.648453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.648622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.648648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.648859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.648895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.649042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.649068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.649204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.649244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.649430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.649455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.649612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.649637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.649794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.649828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.650049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.650093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.650287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.650314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.650467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.650495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.650670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.265 [2024-07-15 16:41:49.650698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.265 qpair failed and we were unable to recover it.
00:25:10.265 [2024-07-15 16:41:49.650884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.650910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.651093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.651121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.651328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.651358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.651489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.651514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.651659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.651702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 
00:25:10.265 [2024-07-15 16:41:49.651854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.651889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.652041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.652070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.652227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.652252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.652389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.652437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.652610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.652638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 
00:25:10.265 [2024-07-15 16:41:49.652846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.652871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.653057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.653085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.653295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.653323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.653477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.653502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.653682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.653710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 
00:25:10.265 [2024-07-15 16:41:49.653853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.653888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.654084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.654110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.654241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.654265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.654440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.654467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.654672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.654721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 
00:25:10.265 [2024-07-15 16:41:49.654912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.654939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.655794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.655827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.656027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.656057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.656249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.656275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.656451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.656479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 
00:25:10.265 [2024-07-15 16:41:49.656687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.656716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.656890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.656919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.657088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.657113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.657256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.657296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.657443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.657472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 
00:25:10.265 [2024-07-15 16:41:49.657669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.657694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.657833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.657859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.657999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.658041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.658199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.658224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.658399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.658432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 
00:25:10.265 [2024-07-15 16:41:49.658660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.658707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.658891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.265 [2024-07-15 16:41:49.658917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.265 qpair failed and we were unable to recover it. 00:25:10.265 [2024-07-15 16:41:49.659072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.659100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.659277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.659305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.659482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.659507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.659695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.659741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.659906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.659935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.660116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.660141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.660303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.660350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.660588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.660635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.660802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.660827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.660998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.661027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.661201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.661229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.661417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.661442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.661629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.661675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.661849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.661882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.662071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.662096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.662277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.662305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.662510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.662556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.662709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.662736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.662858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.662895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.663059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.663087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.663302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.663328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.663521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.663549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.663759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.663799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.663982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.664008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.664158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.664194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.664402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.664447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.664620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.664645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.664808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.664833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.665034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.665062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.665267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.665292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.665439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.665467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.665657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.665710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.665933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.665958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.666169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.666200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.666376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.666404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.666622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.666647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.666856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.666889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.667067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.667095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.667266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.667293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.667484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.667512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.667722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.667769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.667927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.667952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.668086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.668129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.668363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.668392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.668544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.668569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.668723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.668766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.668956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.668985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 00:25:10.266 [2024-07-15 16:41:49.669174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.266 [2024-07-15 16:41:49.669208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.266 qpair failed and we were unable to recover it. 
00:25:10.266 [2024-07-15 16:41:49.669354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.669382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.669521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.669550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.669732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.669757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.669891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.669917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.670053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.670080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.670209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.670234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.670370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.670412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.670592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.670620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.670794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.670819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.670986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.671015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.671170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.671198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.671372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.671397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.671532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.671578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.671749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.266 [2024-07-15 16:41:49.671777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.266 qpair failed and we were unable to recover it.
00:25:10.266 [2024-07-15 16:41:49.671959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.671985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.672143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.672171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.672374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.672399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.672556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.672581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.672734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.672759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.672894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.672938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.673146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.673171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.673353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.673381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.673568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.673614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.673803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.673828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.673998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.674024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.674166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.674190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.674352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.674377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.674577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.674605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.674747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.674775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.674940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.674966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.675129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.675154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.675353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.675399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.675601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.675626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.675776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.675803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.675983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.676013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.676171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.676196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.676334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.676376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.676554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.676582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.676760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.676785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.676953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.676982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.677136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.677177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.677335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.677361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.677518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.677560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.677707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.677734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.677889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.677915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.678077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.678103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.678269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.678297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.678477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.678502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.678714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.678741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.678930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.678976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.679171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.679198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.679362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.679393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.679539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.679568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.679786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.679812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.679981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.680011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.680194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.680223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.680385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.680412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.680620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.680649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.680794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.680823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.680979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.681006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.681220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.681248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.681393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.681421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.681606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.681633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.681843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.681871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.682030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.682059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.682250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.682276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.682435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.682470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.682639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.682668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.683013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.683040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.683219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.683248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.683398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.683428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.683610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.683635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.683814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.683842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.684079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.684119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.684298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.684324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.267 [2024-07-15 16:41:49.684505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.267 [2024-07-15 16:41:49.684533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.267 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.684733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.684782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.684970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.684996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.685142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.685170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.685340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.685386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.685571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.685597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.685771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.685799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.686012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.686041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.686195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.686220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.686391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.686419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.686656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.686706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.686905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.686931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.687117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.687146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.687408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.687459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.687611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.687637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.687814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.687842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.688036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.688064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.688218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.688243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.688404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.688452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.688682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.688729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.688894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.688920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.689099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.689127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.689337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.689365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.689544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.689569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.689752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.689780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.689981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.690009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.690190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.690215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.690353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.690378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.690541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.690584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.690765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.690790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.690979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.691008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.691181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.691209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.691366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.691392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.691529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.691570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.691750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.691775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.691935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.691960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.692111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.692139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.692352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.692379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.692537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.692563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.692748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.268 [2024-07-15 16:41:49.692776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.268 qpair failed and we were unable to recover it.
00:25:10.268 [2024-07-15 16:41:49.692923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.692951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.693158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.693183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.693372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.693400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.693659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.693708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.693914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.693939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 
00:25:10.268 [2024-07-15 16:41:49.694100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.694132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.694311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.694339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.694548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.694573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.694771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.694798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.694947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.694976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 
00:25:10.268 [2024-07-15 16:41:49.695140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.695164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.695339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.695367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.695516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.695543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.695694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.695719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.695901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.695929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 
00:25:10.268 [2024-07-15 16:41:49.696131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.696158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.696371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.696396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.696551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.696579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.696782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.696810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.696974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.697009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 
00:25:10.268 [2024-07-15 16:41:49.697165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.697208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.697377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.697405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.697611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.697635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.697778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.697806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.697979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.698004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 
00:25:10.268 [2024-07-15 16:41:49.698166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.268 [2024-07-15 16:41:49.698191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.268 qpair failed and we were unable to recover it. 00:25:10.268 [2024-07-15 16:41:49.698357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.698382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.698537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.698565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.698770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.698795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.698977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.699005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.699170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.699198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.699343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.699368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.699525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.699568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.699750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.699779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.699929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.699955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.700109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.700151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.700353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.700381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.700554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.700579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.700778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.700805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.700991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.701019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.701234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.701259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.701444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.701473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.701678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.701706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.701913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.701939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.702120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.702147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.702314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.702342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.702533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.702558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.702741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.702768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.702971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.703000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.703173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.703197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.703332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.703357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.703515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.703540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.703694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.703718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.703934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.703960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.704125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.704150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.704340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.704365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.704542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.704572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.704744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.704771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.704946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.704972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.705152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.705180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.705363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.705390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.705567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.705592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.705762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.705790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.705943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.705971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.706132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.706158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.706336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.706364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.706534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.706561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.706738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.706763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.706974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.707003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.707195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.707220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.707343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.707368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.707548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.707576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.707730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.707759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.707936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.707966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 00:25:10.269 [2024-07-15 16:41:49.708131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.269 [2024-07-15 16:41:49.708156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.269 qpair failed and we were unable to recover it. 
00:25:10.269 [2024-07-15 16:41:49.708333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.269 [2024-07-15 16:41:49.708361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.269 qpair failed and we were unable to recover it.
00:25:10.271 [the connect()/qpair-failure triple above repeats continuously, same tqpair=0x1628200, addr=10.0.0.2, port=4420, from 16:41:49.708333 through 16:41:49.730910]
00:25:10.271 [2024-07-15 16:41:49.731073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.731098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.731304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.731332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.731518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.731546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.731732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.731760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.731968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.731994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 
00:25:10.271 [2024-07-15 16:41:49.732150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.732181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.732322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.732350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.732527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.732555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.732712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.732737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.732880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.732922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 
00:25:10.271 [2024-07-15 16:41:49.733122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.733150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.733292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.733320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.733464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.733489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.733647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.733689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.733863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.733896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 
00:25:10.271 [2024-07-15 16:41:49.734066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.734094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.734252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.734277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.734455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.734483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.734694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.734719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.734855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.734896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 
00:25:10.271 [2024-07-15 16:41:49.735062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.735088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.735267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.735295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.735474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.735501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.271 [2024-07-15 16:41:49.735675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.271 [2024-07-15 16:41:49.735703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.271 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.735854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.735887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.736016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.736057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.736234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.736262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.736436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.736464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.736613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.736638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.736814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.736841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.737005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.737030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.737155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.737195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.737373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.737402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.737615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.737643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.737824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.737852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.738012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.738041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.738203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.738228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.738442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.738470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.738648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.738676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.738844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.738872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.739037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.739062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.739187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.739229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.739432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.739460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.739627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.739655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.739843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.739867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.740020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.740047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.740265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.740290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.740459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.740484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.740669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.740694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.740897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.740925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.741099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.741127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.741296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.741324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.741500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.741526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.741736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.741799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.741962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.741990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.742169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.742196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.742377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.742403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.742526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.742567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.742745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.742770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.742928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.742971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.743164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.743189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.743453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.743507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.743691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.743716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.743883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.743909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.744039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.744066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.744204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.744229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.744425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.744451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.744640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.744668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.744852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.744882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.745064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.745092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.745268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.745296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.745476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.745504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.745691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.745716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.745752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16360e0 (9): Bad file descriptor 00:25:10.272 [2024-07-15 16:41:49.745994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.746038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 00:25:10.272 [2024-07-15 16:41:49.746260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.272 [2024-07-15 16:41:49.746289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.272 qpair failed and we were unable to recover it. 
00:25:10.272 [2024-07-15 16:41:49.746474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.272 [2024-07-15 16:41:49.746499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.272 qpair failed and we were unable to recover it.
[The error pair above (posix_sock_create connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock failure and "qpair failed and we were unable to recover it.") repeats continuously for the same tqpair=0x7f33f8000b90, addr=10.0.0.2, port=4420 from 16:41:49.746 through 16:41:49.769; repeats elided.]
00:25:10.274 [2024-07-15 16:41:49.769562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.769591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.769755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.769783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.769959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.769985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.770205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.770233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.770385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.770413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 
00:25:10.274 [2024-07-15 16:41:49.770595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.770620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.770789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.770817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.770993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.771022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.771204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.771230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.771375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.771403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 
00:25:10.274 [2024-07-15 16:41:49.771547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.771576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.771760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.771786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.771968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.771998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.772208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.772236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.772415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.772440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 
00:25:10.274 [2024-07-15 16:41:49.772574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.772615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.772783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.772827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.773029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.773057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.773243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.773273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.773447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.773494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 
00:25:10.274 [2024-07-15 16:41:49.773710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.274 [2024-07-15 16:41:49.773736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.274 qpair failed and we were unable to recover it. 00:25:10.274 [2024-07-15 16:41:49.773899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.773926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.774139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.774167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.774326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.774352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.774529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.774558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.774757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.774786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.774970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.774996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.775181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.775209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.775465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.775515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.775700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.775732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.775881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.775925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.776108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.776137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.776285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.776311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.776484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.776513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.776721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.776750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.776935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.776960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.777120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.777148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.777327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.777356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.777517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.777542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.777700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.777725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.777910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.777940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.778123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.778149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.778289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.778318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.778602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.778657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.778827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.778856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.779022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.779047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.779196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.779225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.779402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.779429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.779602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.779631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.779839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.779865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.780013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.780038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.780171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.780215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.780360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.780389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.780575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.780601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.780760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.780785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.780962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.781005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.781181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.781208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.781371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.781415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.781617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.781663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.781873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.781904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.782056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.782083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.782347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.782396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.782578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.782604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.782753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.782783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.782964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.782995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.783179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.783205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.783379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.783406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.783720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.783771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.783929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.783956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.784088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.784134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.784423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.784481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.784666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.784692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.784827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.784871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.785054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.785083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 00:25:10.275 [2024-07-15 16:41:49.785263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.785289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it. 
00:25:10.275 [2024-07-15 16:41:49.785473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.275 [2024-07-15 16:41:49.785500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.275 qpair failed and we were unable to recover it.
[log condensed: the same pair of errors — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error — repeats continuously from 16:41:49.785473 through 16:41:49.809313 for tqpair=0x7f33e8000b90 and tqpair=0x7f33f8000b90, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:25:10.277 [2024-07-15 16:41:49.809500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.809526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.809683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.809708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.809895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.809923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.810062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.810090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.810254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.810279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 
00:25:10.277 [2024-07-15 16:41:49.810463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.810489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.810702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.810748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.810932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.810959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.811137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.811165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.811342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.811368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 
00:25:10.277 [2024-07-15 16:41:49.811526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.811552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.811722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.811750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.277 qpair failed and we were unable to recover it. 00:25:10.277 [2024-07-15 16:41:49.811956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.277 [2024-07-15 16:41:49.811981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.812170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.812196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.812350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.812379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.812584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.812612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.812765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.812790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.812965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.812994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.813164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.813192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.813353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.813380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.813505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.813530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.813721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.813750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.813960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.813986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.814159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.814187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.814375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.814403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.814608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.814633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.814811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.814843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.815000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.815029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.815190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.815215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.815373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.815398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.815553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.815582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.815790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.815815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.816022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.816050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.816224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.816253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.816429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.816454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.816664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.816692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.816835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.816862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.817058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.817083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.817243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.817268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.817453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.817481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.817641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.817668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.817849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.817895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.818079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.818107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.818288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.818313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.818486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.818514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.818716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.818744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.818932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.818958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.819168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.819196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.819385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.819410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.819566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.819592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.819749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.819774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.819993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.820022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.820208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.820233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.820411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.820439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.820638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.820667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.820871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.820904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.821050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.821075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.821256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.821284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.821475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.821500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.821677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.821705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.821858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.821890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.822070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.822095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.822302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.822330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.822481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.822509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.822688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.822714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.822889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.822917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.823096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.823128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.823278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.823303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.823468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.823493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.823675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.823704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.823911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.823937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.824082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.824110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.824333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.824358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.824512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.824537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.824679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.824707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 
00:25:10.278 [2024-07-15 16:41:49.824857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.824897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.825062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.825087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.825221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.825247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.278 qpair failed and we were unable to recover it. 00:25:10.278 [2024-07-15 16:41:49.825426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.278 [2024-07-15 16:41:49.825454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.825608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.825635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 
00:25:10.279 [2024-07-15 16:41:49.825844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.825873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.826061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.826089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.826244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.826269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.826477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.826505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.826729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.826757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 
00:25:10.279 [2024-07-15 16:41:49.826961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.826986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.827197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.827225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.827366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.827394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.827548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.827574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.279 [2024-07-15 16:41:49.827747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.827775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 
00:25:10.279 [2024-07-15 16:41:49.827943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.279 [2024-07-15 16:41:49.827969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.279 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.828109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.828136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.828341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.828369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.828580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.828608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.828794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.828819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 
00:25:10.558 [2024-07-15 16:41:49.829025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.829053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.829237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.829266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.829442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.829467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.829591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.829634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.829808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.829836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 
00:25:10.558 [2024-07-15 16:41:49.830000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.830026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.830199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.830226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.830429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.830455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.830635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.830661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.830806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.830834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 
00:25:10.558 [2024-07-15 16:41:49.831017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.831043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.831173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.831202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.831359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.831384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.831565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.831593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.831748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.831773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 
00:25:10.558 [2024-07-15 16:41:49.831980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.832009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.832146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.832175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.832363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.832388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.832535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.832563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.832704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.832733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 
00:25:10.558 [2024-07-15 16:41:49.832892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.832918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.833048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.833090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.833264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.833292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.833476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.833501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.833679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.833708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 
00:25:10.558 [2024-07-15 16:41:49.833889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.833918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.558 qpair failed and we were unable to recover it. 00:25:10.558 [2024-07-15 16:41:49.834094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.558 [2024-07-15 16:41:49.834119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.834298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.834326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.834500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.834528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.834706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.834732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.834874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.834909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.835086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.835114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.835261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.835286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.835484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.835512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.835690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.835718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.835888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.835916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.836094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.836119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.836311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.836339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.836523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.836548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.836746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.836774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.836962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.836987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.837144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.837169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.837350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.837378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.837586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.837614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.837817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.837842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.838008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.838034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.838240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.838268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.838451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.838476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.838650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.838678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.838856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.838889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.839071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.839095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.839259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.839291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.839478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.839504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.839693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.839717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.839888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.839915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.840105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.840130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.840322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.840347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.840535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.840562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.840740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.840770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.840931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.840957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.841161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.841189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.841363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.841390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.841582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.841607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.841740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.841766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.841949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.841975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.842145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.842170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.842375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.842403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.842572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.842600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.842779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.842804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.843005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.843034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.843202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.843229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.843401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.843426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.843607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.843635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.843806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.843836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.843998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.844025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
00:25:10.559 [2024-07-15 16:41:49.844229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.844257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.844432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.844462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.844611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.844636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.844811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.844839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 00:25:10.559 [2024-07-15 16:41:49.844994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.559 [2024-07-15 16:41:49.845023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.559 qpair failed and we were unable to recover it. 
[... the same three-line record repeats continuously from 16:41:49.845226 through 16:41:49.867199: posix.c:1038:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420, and every retry ends with "qpair failed and we were unable to recover it." ...]
00:25:10.561 [2024-07-15 16:41:49.867381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.867409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.867563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.867590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.867744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.867786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.867927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.867956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.868140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.868166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 
00:25:10.561 [2024-07-15 16:41:49.868343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.868371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.868548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.868577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.868756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.868781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.868970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.869000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.869172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.869200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 
00:25:10.561 [2024-07-15 16:41:49.869370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.869396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.869531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.869556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.869760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.869787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.869940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.869966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.870160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.870185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 
00:25:10.561 [2024-07-15 16:41:49.870351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.870379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.870558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.870583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.870759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.870787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.870968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.870996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.871176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.871201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 
00:25:10.561 [2024-07-15 16:41:49.871411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.871438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.871586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.871619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.871801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.871826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.871966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.871992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 00:25:10.561 [2024-07-15 16:41:49.872155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.872181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.561 qpair failed and we were unable to recover it. 
00:25:10.561 [2024-07-15 16:41:49.872346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.561 [2024-07-15 16:41:49.872371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.872553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.872582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.872727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.872755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.872939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.872965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.873123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.873149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.873344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.873372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.873589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.873614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.873764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.873793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.873941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.873971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.874152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.874177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.874313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.874339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.874520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.874545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.874705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.874734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.874947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.874973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.875127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.875169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.875387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.875412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.875622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.875650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.875854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.875888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.876062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.876087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.876267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.876295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.876437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.876465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.876674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.876700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.876907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.876939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.877144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.877172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.877379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.877404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.877604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.877632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.877815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.877843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.878033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.878058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.878269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.878297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.878453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.878483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.878659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.878684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.878835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.878863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.879057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.879083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.879214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.879241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.879412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.879440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.879617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.879644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.879811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.879842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.879995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.880021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.880191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.880219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.880379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.880405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.880608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.880636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.880823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.880851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.881018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.881043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.881217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.881244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.881396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.881424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.881598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.881623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.881792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.881820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.881976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.882006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.882191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.882217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.882397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.882426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.882611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.882639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.882819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.882844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.882995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.883021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.883238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.883266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.883438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.883463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.883592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.883634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.883813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.883840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.884060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.884086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.884263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.884291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 00:25:10.562 [2024-07-15 16:41:49.884464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.562 [2024-07-15 16:41:49.884492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.562 qpair failed and we were unable to recover it. 
00:25:10.562 [2024-07-15 16:41:49.884657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.884682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.884867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.884902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.885104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.885129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.885327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.885353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.885568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.885596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.885803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.885830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.885994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.886020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.886152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.886194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.886379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.886407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.886561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.886586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.886796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.886824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.887004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.562 [2024-07-15 16:41:49.887032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.562 qpair failed and we were unable to recover it.
00:25:10.562 [2024-07-15 16:41:49.887210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.887235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.887412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.887440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.887581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.887609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.887769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.887795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.887932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.887981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.888155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.888183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.888364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.888389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.888572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.888600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.888752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.888780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.888963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.888990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.889199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.889228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.889424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.889449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.889583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.889609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.889791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.889819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.889962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.889991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.890201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.890226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.890410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.890438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.890639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.890668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.890835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.890860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.891045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.891073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.891255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.891283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.891468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.891493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.891634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.891662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.891801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.891829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.892017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.892043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.892183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.892209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.892349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.892374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.892529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.892554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.892678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.892703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.892833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.892858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.893027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.893053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.893194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.893220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.893384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.893409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.893539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.893564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.893720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.893761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.893937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.893966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.894116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.894144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.894287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.894314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.894517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.894544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.894711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.894752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.894923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.894951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.895116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.895141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.895321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.895365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.895580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.895622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.895840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.895896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.896069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.896095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.896277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.896320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.896508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.896552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.896713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.896757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.896897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.896929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.897141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.897184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.897442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.897496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.897707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.897749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.897937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.897964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.898141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.898188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.898369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.898413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.898604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.898632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.898810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.898836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.899021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.899065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.899277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.899319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.899643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.899696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.899854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.899884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.900071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.900115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.900281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.900325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.900525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.900568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.563 qpair failed and we were unable to recover it.
00:25:10.563 [2024-07-15 16:41:49.900750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.563 [2024-07-15 16:41:49.900775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.900953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.900996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.901183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.901226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.901413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.901456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.901647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.901672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.901857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.901888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.902070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.902117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.902311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.902354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.902534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.902582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.902744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.902769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.902947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.902992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.903204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.903247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.903433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.903476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.903666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.903692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.903855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.903886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.904066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.904109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.904295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.904338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.904520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.564 [2024-07-15 16:41:49.904563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.564 qpair failed and we were unable to recover it.
00:25:10.564 [2024-07-15 16:41:49.904724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.904749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.904931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.904984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.905150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.905194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.905383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.905427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.905639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.905683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.905812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.905837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.906054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.906099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.906291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.906335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.906499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.906542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.906702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.906727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.906886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.906930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.907091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.907133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.907346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.907389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.907590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.907636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.907796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.907822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.908040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.908085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.908232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.908274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.908486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.908527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.908725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.908750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.908895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.908921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.909109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.909152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.909337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.909381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.909536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.909579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.909769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.909794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.909981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.910025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.910243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.910286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.910504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.910546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.910683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.910708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.910900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.910930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.911144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.911172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.911406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.911434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.911638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.911683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.911846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.911872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.912043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.912069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.912253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.912296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.912486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.912533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.912666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.912691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.912855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.912887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.913076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.913102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.913284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.913326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.913505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.913547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.913673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.913698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.913866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.913902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.914061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.914105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.914324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.914366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.914553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.914597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.914790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.914816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.914966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.914992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.915175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.915219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.915397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.915441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.915624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.915669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.915836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.915861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.916027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.916071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.916260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.916303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.916456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.916498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 
00:25:10.564 [2024-07-15 16:41:49.916665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.916691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.564 [2024-07-15 16:41:49.916819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.564 [2024-07-15 16:41:49.916845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.564 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.917007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.917053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.917277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.917320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.917512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.917555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.917739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.917765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.917934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.917963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.918200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.918243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.918375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.918402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.918591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.918616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.918782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.918808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.918992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.919036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.919223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.919267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.919488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.919535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.919695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.919720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.919862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.919894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.920054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.920098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.920284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.920327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.920520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.920563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.920706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.920731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.920897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.920940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.921146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.921188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.921375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.921417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.921570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.921614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.921775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.921800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.921982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.922026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.922212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.922255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.922442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.922485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.922683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.922709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.922867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.922899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.923052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.923096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.923274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.923317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.923530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.923573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.923763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.923789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.923920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.923947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.924136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.924180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.924366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.924409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.924596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.924639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.924799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.924825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.925010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.925054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.925271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.925313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.925503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.925547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.925684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.925711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.925866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.925897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.926111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.926154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.926346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.926389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.926602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.926645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.926807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.926832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.927036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.927062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.927268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.927311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.927505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.927531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.927697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.927723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.927912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.927939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.928125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.928171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.928355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.928398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.928546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.928589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.928749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.928775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.928954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.928998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.929148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.929193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.929402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.929445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.929637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.929663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.929800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.929826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.930022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.930064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.930286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.930330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.930519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.930562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.930748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.930774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.930925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.930954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.931192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.931235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.931429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.931472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.931674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.931700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 
00:25:10.565 [2024-07-15 16:41:49.931856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.931886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.932095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.565 [2024-07-15 16:41:49.932137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.565 qpair failed and we were unable to recover it. 00:25:10.565 [2024-07-15 16:41:49.932323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.932366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.932518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.932561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.932747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.932772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.932925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.932955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.933152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.933196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.933377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.933420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.933638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.933679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.933833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.933859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.934054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.934098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.934286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.934329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.934545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.934588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.934733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.934759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.934936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.934965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.935141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.935169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.935335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.935378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.935559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.935601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.935785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.935810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.935971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.935998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.936212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.936256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.936419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.936461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.936649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.936674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.936855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.936891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.937074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.937118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.937297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.937343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.937534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.937578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.937735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.937761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.937970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.938014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.938215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.938259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.938409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.938452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.938642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.938685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.938849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.938874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.939097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.939140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.939308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.939350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.939563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.939606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.939770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.939796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.939990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.940016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.940201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.940244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.940460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.940502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.940662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.940705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.940869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.940901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.941062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.941087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.941297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.941339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.941489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.941532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.941720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.941763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.941944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.941973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.942174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.942216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.942389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.942416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.942625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.942668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.942839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.942865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.943047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.943091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.943278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.943322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.943504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.943547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.943740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.943766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.943973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.944015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.944209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.944238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.944391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.944417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.944579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.944604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.944764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.944790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 
00:25:10.566 [2024-07-15 16:41:49.944978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.945021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.945210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.945253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.945439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.945482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.945674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.566 [2024-07-15 16:41:49.945703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.566 qpair failed and we were unable to recover it. 00:25:10.566 [2024-07-15 16:41:49.945859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.567 [2024-07-15 16:41:49.945891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.567 qpair failed and we were unable to recover it. 
00:25:10.567 [2024-07-15 16:41:49.946070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.946113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.946328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.946370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.946547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.946593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.946756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.946781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.946940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.946984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.947165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.947208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.947369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.947412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.947600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.947626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.947785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.947811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.947998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.948041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.948197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.948241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.948451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.948494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.948664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.948690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.948831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.948857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.949016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.949059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.949247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.949290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.949506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.949548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.949715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.949740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.949945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.949989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.950205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.950248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.950438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.950481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.950663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.950689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.950854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.950885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.951044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.951088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.951302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.951344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.951564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.951606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.951739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.951765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.951899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.951925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.952110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.952156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.952368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.952411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.952612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.952655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.952797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.952822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.953040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.953069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.953264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.953308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.953503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.953531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.953684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.953711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.953844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.953871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.954094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.954137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.954322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.954381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.954594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.954636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.954818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.954844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.955030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.955073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.955293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.955335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.955524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.955566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.955741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.955767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.955976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.956019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.956232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.956274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.956428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.956470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.956658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.956700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.956888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.956914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.957098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.957123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.957266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.957310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.957479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.957505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.957690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.957733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.957979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.958008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.958201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.958248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.958439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.958482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.958693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.958722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.958951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.958995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.959187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.959231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.959384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.959427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.959607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.959650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.959836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.959862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.960059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.960103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.960260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.960303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.960491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.960535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.960728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.960753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.960929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.960958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.567 [2024-07-15 16:41:49.961184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.567 [2024-07-15 16:41:49.961226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.567 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.961408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.961455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.961668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.961712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.961888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.961917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1621458 Killed "${NVMF_APP[@]}" "$@"
00:25:10.568 [2024-07-15 16:41:49.962106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.962151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.962340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.962384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.962598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.962642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.962806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.962831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:10.568 [2024-07-15 16:41:49.963024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.963052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:10.568 [2024-07-15 16:41:49.963266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.963311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:10.568 [2024-07-15 16:41:49.963503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.963533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.963705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.963732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.963927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.963954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.964138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.964181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.964393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.964436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.964627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.964671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.964803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.964829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.965011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.965056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.965248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.965276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.965421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.965448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.965660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.965702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.965841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.965872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.966070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.966114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.966296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.966341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.966552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.568 [2024-07-15 16:41:49.966595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.568 qpair failed and we were unable to recover it.
00:25:10.568 [2024-07-15 16:41:49.966728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.966753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.966929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.966974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.967168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.967196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.967401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.967445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.967608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.967633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 
00:25:10.568 [2024-07-15 16:41:49.967790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.967815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.967999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.968043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.968230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.968272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.968485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.968528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.968692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.968718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 
00:25:10.568 [2024-07-15 16:41:49.968888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.968914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.969105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1622013 00:25:10.568 [2024-07-15 16:41:49.969149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:10.568 [2024-07-15 16:41:49.969332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1622013 00:25:10.568 [2024-07-15 16:41:49.969377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 
00:25:10.568 [2024-07-15 16:41:49.969589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1622013 ']' 00:25:10.568 [2024-07-15 16:41:49.969633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.568 [2024-07-15 16:41:49.969796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.969822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.568 [2024-07-15 16:41:49.969982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.970009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:10.568 [2024-07-15 16:41:49.970163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.970206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.568 16:41:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.568 [2024-07-15 16:41:49.970424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.970468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.970662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.970710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.970885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.970912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.971106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.971133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 
00:25:10.568 [2024-07-15 16:41:49.971321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.971365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.971521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.971566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.971753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.971779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.971965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.971991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.972176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.972218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 
00:25:10.568 [2024-07-15 16:41:49.972430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.972474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.972650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.972692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.972850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.972892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.973061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.568 [2024-07-15 16:41:49.973087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.568 qpair failed and we were unable to recover it. 00:25:10.568 [2024-07-15 16:41:49.973272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.973315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.973526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.973568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.973710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.973736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.973928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.973972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.974152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.974196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.974387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.974431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.974626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.974652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.974815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.974840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.975028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.975071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.975263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.975307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.975516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.975560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.975700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.975727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.975908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.975935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.976119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.976164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.976354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.976397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.976567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.976610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.976798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.976824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.977013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.977056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.977231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.977276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.977458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.977501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.977666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.977692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.977857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.977892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.978109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.978153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.978312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.978355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.978508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.978554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.978722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.978747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.978927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.978956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.979123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.979168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.979387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.979435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.979613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.979658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.979819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.979845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.980046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.980094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.980281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.980310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.980539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.980582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.980770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.980795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.980975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.981020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.981173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.981220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.981404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.981447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.981582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.981609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.981800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.981826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.982027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.982071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.982285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.982314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.982474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.982502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.982741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.982793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.982959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.982985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.983141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.983184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.983392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.983420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.983636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.983663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.983815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.983843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.984025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.984051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.984212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.984242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.984421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.984450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.984853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.984926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.985112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.985138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.985332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.985360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.985556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.985588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.985771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.985799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.985990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.986016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.986152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.986177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.986363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.986390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.986592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.986620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.986762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.986790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.986974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.986999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.987193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.987218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.987429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.987475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.987677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.987705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.987885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.987928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.988055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.988081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.988262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.988290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.988461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.988489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 00:25:10.569 [2024-07-15 16:41:49.988709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.569 [2024-07-15 16:41:49.988774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.569 qpair failed and we were unable to recover it. 
00:25:10.569 [2024-07-15 16:41:49.988930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.988956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.989120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.989145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.989286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.989312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.989442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.989484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.989633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.989662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.989888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.989931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.990095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.990120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.990332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.990360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.990630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.990680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.990849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.990882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.991040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.991065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.991248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.991273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.991462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.991490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.991666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.991694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.991851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.991882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.992049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.992074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.992261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.992289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.992488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.992516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.992694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.992722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.992896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.992939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.993076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.993101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.993250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.993279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.993432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.993460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.993658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.993686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.993870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.993902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.994047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.994073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.994231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.994256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.994466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.994494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.994635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.994663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.994866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.994903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.995109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.995134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.995299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.995326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.995581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.995640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.995828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.995856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.996099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.996126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.996267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.996292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.996475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.996503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.996648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.996676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.996860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.996895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.997111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.997139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.997346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.997372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.997500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.997525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.997702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.997729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.997909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.997938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.998094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.998119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.998282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.998326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.998467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.998494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.998673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.998698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.998851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.998891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.999043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.999071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:49.999248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.999274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.999426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.999454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.999654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.999683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:49.999846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:49.999871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.000034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.000062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:50.000210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.000238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.000415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.000440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.000620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.000649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.000806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.000833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.001017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.001044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:50.001240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.001269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.001453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.001479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.001636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.001661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.001838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.001866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.002057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.002085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.570 [2024-07-15 16:41:50.002252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.002278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.002469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.002497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.002669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.002698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.002865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.002901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 00:25:10.570 [2024-07-15 16:41:50.003058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.570 [2024-07-15 16:41:50.003084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.570 qpair failed and we were unable to recover it. 
00:25:10.571 [2024-07-15 16:41:50.003275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.003303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.003492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.003517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.003706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.003734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.003911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.003940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.004099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.004124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 
00:25:10.571 [2024-07-15 16:41:50.004268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.004311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.004489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.004517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.004729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.004758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.004911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.004941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.005095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.005128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 
00:25:10.571 [2024-07-15 16:41:50.005325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.005351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.005489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.005514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.005681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.005710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.007259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.007300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 00:25:10.571 [2024-07-15 16:41:50.007468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.571 [2024-07-15 16:41:50.007497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.571 qpair failed and we were unable to recover it. 
00:25:10.571 [2024-07-15 16:41:50.007637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.007679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.007843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.007869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.008039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.008065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.008245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.008274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.008421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.008450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.008634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.008659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.008843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.008889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.009048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.009073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.009219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.009244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.009417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.009446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.009646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.009675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.009826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.009851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.009986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.010029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.010213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.010241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.010422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.010447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.010580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.010605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.010771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.010797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.010933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.010959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.011088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.011131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.011300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.011330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.011517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.011543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.011719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.011752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.011906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.011934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.012142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.012167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.012371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.012399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.012572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.012601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.012771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.012799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.012986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.013012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.013149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.013191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.013404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.013429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.013581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.013610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.013762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.013791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.013970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.013995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.014149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.014178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.014374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.014402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.014583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.014609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.014819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.014847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.015037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.015063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.015233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.015259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.015464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.015492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.015699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.015727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.015936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.015962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.016139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.016167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.016344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.016372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.016530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.016556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.016692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.016717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.571 [2024-07-15 16:41:50.016905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.571 [2024-07-15 16:41:50.016931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.571 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.017112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.017138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.017316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.017345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.017534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.017563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.017743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.017768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.017942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.017968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.018098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.018124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.018313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.018339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.018519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.018548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.018753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.018781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.018991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.019018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.019166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.019197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.019395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.019423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.019594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.019620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.019797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.019826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.020013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.020039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.020201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.020228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.020401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.020429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.020599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.020627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.020800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.020826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 [2024-07-15 16:41:50.020802] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.020908] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:10.572 [2024-07-15 16:41:50.020960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.021004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.021177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.021204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.021344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.021369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.021545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.021572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.021748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.021777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.021964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.021991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.022167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.022198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.022394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.022424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.022577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.022606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.022733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.022775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.022984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.023015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.023175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.023204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.023383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.023413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.023604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.023633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.023806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.023832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.024032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.024061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.024216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.024245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.024431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.024457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.024644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.024673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.024819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.024848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.025072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.025098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.025312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.025341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.025520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.025549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.025708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.025735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.025898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.025927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.026103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.026132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.026288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.026315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.026494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.572 [2024-07-15 16:41:50.026522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.572 qpair failed and we were unable to recover it.
00:25:10.572 [2024-07-15 16:41:50.026709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.026738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.026913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.026940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.027126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.027155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.027332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.027361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.027546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.027572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 
00:25:10.572 [2024-07-15 16:41:50.027778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.027807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.027990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.028016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.028204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.028230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.028419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.028448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.028623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.028652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 
00:25:10.572 [2024-07-15 16:41:50.028830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.028856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.029042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.029071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.029272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.029301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.029457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.029483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.029661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.029690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 
00:25:10.572 [2024-07-15 16:41:50.029860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.029905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.030095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.030120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.030293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.030322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.030494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.030523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.030703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.030729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 
00:25:10.572 [2024-07-15 16:41:50.031046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.572 [2024-07-15 16:41:50.031076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.572 qpair failed and we were unable to recover it. 00:25:10.572 [2024-07-15 16:41:50.031313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.031347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.031510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.031537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.031712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.031741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.031919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.031948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.032136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.032161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.032301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.032327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.032509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.032538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.032716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.032741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.032920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.032950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.033131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.033159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.033344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.033369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.033549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.033577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.033746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.033774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.033956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.033984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.034143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.034187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.034397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.034425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.034573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.034599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.034790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.034819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.035008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.035034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.035197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.035223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.035396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.035425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.035626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.035654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.035810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.035837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.036018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.036047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.036226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.036254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.036433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.036459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.036642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.036670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.036840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.036872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.037033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.037059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.037223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.037265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.037410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.037439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.037595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.037622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.037798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.037826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.038012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.038039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.038174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.038200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.038385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.038413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.038572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.038602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.038758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.038784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.038938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.038965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.039154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.039182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.039329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.039355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.039569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.039598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.039795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.039823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.039998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.040024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.040159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.040204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.040375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.040407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.040563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.040588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.040751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.040777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.040941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.040970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.041133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.041159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.041326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.041352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.041539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.041567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.041752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.041778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.041933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.041959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.042122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.042155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.042337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.042365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.042509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.042537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.042707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.042736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.042925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.042951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 
00:25:10.573 [2024-07-15 16:41:50.043101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.043130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.043341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.043370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.043521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.043547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.043714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.573 [2024-07-15 16:41:50.043740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.573 qpair failed and we were unable to recover it. 00:25:10.573 [2024-07-15 16:41:50.043881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.043925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.044109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.044136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.044319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.044347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.044501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.044530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.044683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.044709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.044871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.044921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.045122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.045151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.045346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.045372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.045535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.045562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.045745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.045774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.045941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.045968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.046105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.046131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.046315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.046345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.046532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.046558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.046682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.046724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.046888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.046918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.047073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.047100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.047269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.047313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.047485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.047514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.047697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.047723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.047866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.047899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.048040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.048066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.048242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.048268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.048450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.048478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.048620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.048648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.048798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.048824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.048968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.049011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.049208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.049236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.049445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.049472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.049618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.049647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.049821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.049849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.050011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.050037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.050208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.050253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.050464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.050496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.050673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.050700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.050868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.050923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.051094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.051122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.051307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.051335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.051485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.051512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.051680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.051709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.051890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.051918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.052070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.052097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.052237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.052265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.052440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.052467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.052642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.052670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.052837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.052871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.053086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.053113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.053272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.053300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.053504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.053531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.053692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.053719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.053889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.053916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.054084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.054109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.054242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.054270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.054428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.054454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.054584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.054611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.054747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.054774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.054940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.054967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.055093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.055120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.055258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.055286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.055455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.055481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.055654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.055679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.055841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.055868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.056044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.056072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.574 [2024-07-15 16:41:50.056229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.056256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.056419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.056445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.056580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.056607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 
00:25:10.574 [2024-07-15 16:41:50.056742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.056769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.056960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.056987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.574 [2024-07-15 16:41:50.057172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.574 [2024-07-15 16:41:50.057199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.574 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.057361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.057388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.057552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.057580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 
00:25:10.575 [2024-07-15 16:41:50.057749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.057776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.057945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.057972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.058116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.058144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.058287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.058314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.058502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.058529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 
00:25:10.575 [2024-07-15 16:41:50.058715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.058741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.058912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.058938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.059187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.059215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.059373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.059401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.059549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.059575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 
00:25:10.575 [2024-07-15 16:41:50.059714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.059742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.059910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.059938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.060105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.060133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.060276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.060304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.060468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.060499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 
00:25:10.575 [2024-07-15 16:41:50.060644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.060670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.060859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.060891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.061034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.061061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.061234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.061262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.061400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.061427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 
00:25:10.575 [2024-07-15 16:41:50.061572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.061600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.061765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.061793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.061959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.061987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.062177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.062206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 00:25:10.575 [2024-07-15 16:41:50.062360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.575 [2024-07-15 16:41:50.062386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.575 qpair failed and we were unable to recover it. 
00:25:10.575 [2024-07-15 16:41:50.062573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.575 [2024-07-15 16:41:50.062600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.575 qpair failed and we were unable to recover it.
00:25:10.577 (previous error sequence repeated 114 more times between [2024-07-15 16:41:50.062762] and [2024-07-15 16:41:50.084443], all with tqpair=0x7f33e8000b90, addr=10.0.0.2, port=4420, errno = 111)
00:25:10.577 [2024-07-15 16:41:50.084634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.084662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.084820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.084847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.085032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.085059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.085218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.085245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.085409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.085435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 
00:25:10.577 [2024-07-15 16:41:50.085599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.085626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.085803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.085831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.086000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.086028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.086189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.086217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.086357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.086385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 
00:25:10.577 [2024-07-15 16:41:50.086526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.086553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.086743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.086770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.086915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.086943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.087119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.087146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.087332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.087359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 
00:25:10.577 [2024-07-15 16:41:50.087517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.087543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.087676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.087703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.087890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.087918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.088080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.088106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.088271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.088298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 
00:25:10.577 [2024-07-15 16:41:50.088491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.088518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.088710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.088738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.088904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.088931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.089120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.089147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.089300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.089327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 
00:25:10.577 [2024-07-15 16:41:50.089487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.089515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.089651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.089679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.089871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.089903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.090046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.090073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.090226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.090253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 
00:25:10.577 [2024-07-15 16:41:50.090411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.090438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.090626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.090653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.090787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.090814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.577 [2024-07-15 16:41:50.090979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.577 [2024-07-15 16:41:50.091007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.577 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.091169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.091200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.091343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.091370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.091556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.091583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.091742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.091769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.091961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.091989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.092150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.092177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.092312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.092339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.092477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.092504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.092692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.092719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.092874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.092907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.093067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.093094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.093253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.093280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.093464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.093491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.093684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.093711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.093901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.093928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.093964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:10.578 [2024-07-15 16:41:50.094062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.094089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.094277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.094305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.094527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.094554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.094746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.094772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.094938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.094966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.095133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.095161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.095321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.095347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.095482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.095509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.095697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.095725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.095886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.095914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.096073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.096099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.096261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.096288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.096457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.096484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.096647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.096673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.096833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.096859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.097065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.097093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.097233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.097260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.097424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.097452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.097590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.097619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.097782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.097810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.098004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.098033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.098203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.098229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.098393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.098420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.098606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.098633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.098851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.098893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 00:25:10.578 [2024-07-15 16:41:50.099088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.578 [2024-07-15 16:41:50.099115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.578 qpair failed and we were unable to recover it. 
00:25:10.578 [2024-07-15 16:41:50.099300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.099327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.099467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.099493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.099661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.099688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.099852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.099896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.100042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.100068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.100233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.100259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.100427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.100454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.100644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.100671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.100859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.100894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.101032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.101058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.101247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.101275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.101442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.101469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.101657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.101684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.101817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.101844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.101989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.102015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.102153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.102181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.102323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.102350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.102548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.102575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.102741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.102769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.102905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.102933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.103120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.103147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.103309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.103335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.103537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.103564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.103701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.103727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.103921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.103948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.104104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.104129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.578 [2024-07-15 16:41:50.104309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.578 [2024-07-15 16:41:50.104335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.578 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.104506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.104535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.104738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.104766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.104909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.104937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.105124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.105151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.105335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.105362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.105549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.105576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.105764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.105791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.105955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.105983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.106147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.106174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.106305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.106331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.106465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.106492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.106633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.106660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.106825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.106857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.107070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.107097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.107227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.107254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.107442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.107469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.107627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.107654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.107841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.107868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.108013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.108040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.108194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.108220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.108387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.108415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.108582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.108609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.108776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.108803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.108941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.108968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.109131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.109168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.109332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.109358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.109557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.109586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.109742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.109769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.109974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.110003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.110196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.110223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.110386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.110415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.110582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.110609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.110746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.110772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.110936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.110964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.111124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.111152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.111323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.111352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.111516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.111543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.111710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.111737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.111906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.111934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.112111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.112139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.112284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.112313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.112505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.112533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.112722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.112749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.112919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.112973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.113137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.113177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.113339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.113367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.113554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.113582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.113752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.113780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.113979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.114007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.114204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.114232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.114388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.114416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.114583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.114610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.114778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.114809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.114965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.114993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.115160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.115192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.115362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.115388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.115551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.115579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.115723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.115750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.115913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.115941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.116107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.116134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.116325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.116352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.116481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.116508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.579 qpair failed and we were unable to recover it.
00:25:10.579 [2024-07-15 16:41:50.116698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.579 [2024-07-15 16:41:50.116727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.116865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.116901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.117072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.117100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.117292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.117319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.117487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.117514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.117657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.117684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.117842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.117884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.118056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.118085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.118251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.118280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.118421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.118449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.118640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.118667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.118835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.118862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.119069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.119098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.119260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.119288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.119457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.119484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.119626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.119654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.119794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.119821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.120018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.120047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.120208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.120235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.120429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.120456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.120652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.120680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.120870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.120906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.121074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.121102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.121267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.121294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.121459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.121486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.121675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.580 [2024-07-15 16:41:50.121702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.580 qpair failed and we were unable to recover it.
00:25:10.580 [2024-07-15 16:41:50.121856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.121901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.122094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.122122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.122302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.122329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.122493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.122520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.122659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.122691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.122862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.122902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.123039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.123067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.123256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.123283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.123447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.123475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.123636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.123663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.123804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.123833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.124003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.124032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.124196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.124224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.124393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.124421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.124584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.124611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.124779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.124807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.124980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.125009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.125176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.125204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.125396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.125425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.125585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.125611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.125775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.125802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.125973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.126002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.126172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.126199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.126363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.126391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.126548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.126577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.126741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.126768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.126938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.126965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.127131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.127168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.127329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.127358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.127526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.127553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.127743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.127771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.127938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.127966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.128152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.128184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.128381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.128410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.128551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.128580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.128742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.128769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.128916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.128944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.129136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.129175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.129317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.129344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.129508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.129535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 
00:25:10.580 [2024-07-15 16:41:50.129676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.129703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.580 [2024-07-15 16:41:50.129870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.580 [2024-07-15 16:41:50.129902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.580 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.130093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.130121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.130320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.130347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.130504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.130535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 
00:25:10.581 [2024-07-15 16:41:50.130723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.130750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.130937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.130965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.131110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.131136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.131300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.131328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.131515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.131545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 
00:25:10.581 [2024-07-15 16:41:50.131717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.131744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.131914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.131941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.132107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.132134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.132323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.132351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.132493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.132521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 
00:25:10.581 [2024-07-15 16:41:50.132709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.132737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.132890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.132918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.133108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.133134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.133281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.133308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.133441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.133480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 
00:25:10.581 [2024-07-15 16:41:50.133645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.133672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.133861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.133894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.134089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.134117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.134254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.134282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.134440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.134471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 
00:25:10.581 [2024-07-15 16:41:50.134626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.134654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.581 [2024-07-15 16:41:50.134843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.581 [2024-07-15 16:41:50.134872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.581 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.135036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.135064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.135225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.135252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.135409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.135436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.135574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.135601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.135767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.135794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.135981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.136007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.136147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.136174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.136331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.136358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.136518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.136545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.136681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.136708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.136866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.136902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.137064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.137091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.137278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.137305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.137496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.137523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.137692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.137719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.137883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.137911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.138076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.138105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.138262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.138293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.138458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.138485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.138632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.138660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.138856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.138891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.139033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.139060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.139224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.139251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.139382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.139410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.139554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.139580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.139740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.139767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.139896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.139924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.140087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.140114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.140274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.140300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.140441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.140469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.140656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.140684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.140857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.140891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.141083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.141110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.141282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.141309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.141472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.141499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.141687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.141716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.141887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.141915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.142089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.142117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.142283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.142311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.142481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.142510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.142677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.142705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.142873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.142907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.143077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.143105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.143283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.143311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.143455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.143484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.143615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.143643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.143810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.143838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.144042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.144071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 
00:25:10.867 [2024-07-15 16:41:50.144231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.144259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.144424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.867 [2024-07-15 16:41:50.144452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.867 qpair failed and we were unable to recover it. 00:25:10.867 [2024-07-15 16:41:50.144614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.144643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.144834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.144862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.145042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.145071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.145268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.145295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.145458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.145486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.145650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.145679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.145816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.145843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.146023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.146071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.146248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.146275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.146418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.146446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.146621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.146648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.146809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.146844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.147025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.147052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.147216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.147242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.147411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.147439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.147573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.147601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.147761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.147787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.147934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.147962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.148089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.148116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.148313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.148340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.148501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.148528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.148672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.148699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.148864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.148902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.149072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.149101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.149262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.149290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.149432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.149460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.149628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.149655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.149798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.149826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.149999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.150028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.150217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.150245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.150411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.150438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.150629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.150656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.150817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.150843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.151019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.151048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.151201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.151229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.151414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.151441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.151602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.151629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.151800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.151829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.151983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.152010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.152178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.152204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.152334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.152361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.152518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.152545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.152681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.152708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.152930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.152958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.153116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.153143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.153314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.153341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.153508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.153534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.153693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.153724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.153892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.153919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.154081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.154108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.154277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.154303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.154465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.154491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.154625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.154651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.868 [2024-07-15 16:41:50.154784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.154810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.154959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.154986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.155152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.155189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.155384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.155411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 00:25:10.868 [2024-07-15 16:41:50.155572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.868 [2024-07-15 16:41:50.155599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.868 qpair failed and we were unable to recover it. 
00:25:10.870 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f33f8000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously through 16:41:50.176924 ...]
00:25:10.870 [2024-07-15 16:41:50.177063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.177090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.177274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.177300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.177464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.177492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.177682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.177709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.177849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.177887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 
00:25:10.870 [2024-07-15 16:41:50.178025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.178051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.178213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.178238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.178370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.178397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.178539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.178566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.178728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.178754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 
00:25:10.870 [2024-07-15 16:41:50.178920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.178948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.179103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.179129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.179300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.179326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.179464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.179490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.179618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.179653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 
00:25:10.870 [2024-07-15 16:41:50.179827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.179853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.180000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.180026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.180185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.180212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.180382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.180410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.180572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.180597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 
00:25:10.870 [2024-07-15 16:41:50.180741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.180767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.180934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.180972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.181112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.181137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.181274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.181300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.181505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.181531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 
00:25:10.870 [2024-07-15 16:41:50.181691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.181718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.181911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.181955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.182101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.182129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.182274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.182302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.182514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.182543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 
00:25:10.870 [2024-07-15 16:41:50.182680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.182707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.182844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.182888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.183083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.183111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.183314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.183342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 00:25:10.870 [2024-07-15 16:41:50.183510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.870 [2024-07-15 16:41:50.183537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.870 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.183705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.183732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.183924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.183952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.184118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.184145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.184291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.184317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.184477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.184511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.184702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.184728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.184872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.184905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.185071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.185099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.185249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.185276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.185448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.185475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.185651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.185678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.185873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.185904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.186094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.186121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.186315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.186342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.186507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.186534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.186673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.186701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.186835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.186861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.187041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.187069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.187275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.187302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.187495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.187522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.187656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.187684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.187842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.187884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.188025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.188051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.188218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.188249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.188415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.188443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.188606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.188633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.188815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.188842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.188989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.189017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.189204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.189230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.189392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.189420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.189581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.189608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.189820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.189870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.190031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.190060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.190224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.190252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.190430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.190457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.190588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.190616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.190807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.190834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.190983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.191012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.191180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.191207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.191350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.191377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.191542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.191569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.191727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.191754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.191897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.191925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.192059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.192086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.192256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.192282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.192445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.192472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.192665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.192691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.192849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.192891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.193036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.193062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 00:25:10.871 [2024-07-15 16:41:50.193221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.871 [2024-07-15 16:41:50.193248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.871 qpair failed and we were unable to recover it. 
00:25:10.871 [2024-07-15 16:41:50.193406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.193432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.193565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.193592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.193783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.193810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.193990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.194018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.194150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.194189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.194350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.194377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.194533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.194561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.194722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.194749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.195009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.195068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.195212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.195242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.195388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.195416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.871 [2024-07-15 16:41:50.195578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.871 [2024-07-15 16:41:50.195606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.871 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.195775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.195802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.195996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.196024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.196188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.196217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.196406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.196433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.196599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.196626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.196767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.196794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.196984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.197012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.197180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.197207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.197367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.197394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.197561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.197589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.197773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.197800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.198018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.198061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.198236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.198264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.198431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.198460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.198628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.198657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.198848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.198898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.199066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.199094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.199247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.199275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.199437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.199464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.199656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.199684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.199850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.199899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.200066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.200094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.200286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.200313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.200473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.200501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.200665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.200693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.200895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.200923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.201086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.201114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.201255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.201280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.201444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.201470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.201636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.201663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.201823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.201848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.201992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.202020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.202164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.202192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.202362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.202390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.202560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.202588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.202725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.202752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.202940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.202967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.203106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.203133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.203271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.203298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.203462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.203489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.203656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.203683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.203885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.203913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.204044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.204071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.204264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.204291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.204421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.204448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.204605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.204631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.204817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.204844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.204990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.205017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.205207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.205233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.205390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.205416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.205551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.205581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.205719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.205745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.205917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.205944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.206104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.206130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.206316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.206343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.206503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.206530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.206687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.206714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.206854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.206886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.207047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.207074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.207247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.872 [2024-07-15 16:41:50.207274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.872 qpair failed and we were unable to recover it.
00:25:10.872 [2024-07-15 16:41:50.207438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.207465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.207653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.207680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.207822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.207849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.207992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.208020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.208194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.208221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.208462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.208488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.208678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.208704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.208839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.208867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.209050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.209091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.209287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.209315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.209451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.209478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.209622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.209649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.209836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.209863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.210040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.210067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.210235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.210263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.210427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.210454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.210616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.210643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.210811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.210843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.211081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.211107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.211237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.211263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.211392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.211418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.211544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.211570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.211709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.873 [2024-07-15 16:41:50.211735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.873 qpair failed and we were unable to recover it.
00:25:10.873 [2024-07-15 16:41:50.211875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.211910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.212045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.212072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.212255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.212281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.212434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.212461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.212596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.212623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.212781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.212808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.212817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.873 [2024-07-15 16:41:50.212853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.873 [2024-07-15 16:41:50.212871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.873 [2024-07-15 16:41:50.212892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.873 [2024-07-15 16:41:50.212909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.873 [2024-07-15 16:41:50.212987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:10.873 [2024-07-15 16:41:50.213052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.213080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.213039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:10.873 [2024-07-15 16:41:50.213085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:10.873 [2024-07-15 16:41:50.213088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:10.873 [2024-07-15 16:41:50.213266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.213292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.213429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.213455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.213616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.213642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.213807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.213833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.213986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.214014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.214161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.214188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.214331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.214357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.214540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.214567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.214700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.214726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.214910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.214937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.215092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.215118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.215257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.215284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.215413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.215439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.215608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.215635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.215885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.215914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.216055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.216081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.216223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.216248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.216415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.216441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.216574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.216600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.216728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.216755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.216890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.216936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.217177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.217203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.217345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.217370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.217530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.217555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.217686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.217715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.217887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.217914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.218079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.218105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.218241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.218267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.218429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.218455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.218613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.218639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.218775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.218800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.218948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.218974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.219101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.219127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.219293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.219318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.219447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.219474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.219596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.219622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.219754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.219780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.219916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.219943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.220079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.220106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.220265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.220291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.220436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.220462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.220624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.220650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.220805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.220831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 00:25:10.873 [2024-07-15 16:41:50.220976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.221003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.873 qpair failed and we were unable to recover it. 
00:25:10.873 [2024-07-15 16:41:50.221161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.873 [2024-07-15 16:41:50.221187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.221330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.221356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.221502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.221528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.221681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.221707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.221846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.221888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 
00:25:10.874 [2024-07-15 16:41:50.222031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.222056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.222243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.222269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.222399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.222429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.222557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.222583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.222719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.222745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 
00:25:10.874 [2024-07-15 16:41:50.222895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.222923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.223073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.223099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.223239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.223265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.223426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.223452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.223586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.223611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 
00:25:10.874 [2024-07-15 16:41:50.223778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.223804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.223945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.223971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.224119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.224145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.224301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.224327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.224463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.224489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 
00:25:10.874 [2024-07-15 16:41:50.224624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.224650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.224833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.224859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.225005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.225031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.225191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.225216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.225390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.225415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 
00:25:10.874 [2024-07-15 16:41:50.225546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.225572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.225738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.225764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.225940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.225967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.226102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.226129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.226314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.226339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 
00:25:10.874 [2024-07-15 16:41:50.226477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.226502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.226670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.226695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.226832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.226858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.226998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.227023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 00:25:10.874 [2024-07-15 16:41:50.227155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.874 [2024-07-15 16:41:50.227181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.874 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.247852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.247885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.248018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.248044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.248179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.248212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.248346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.248373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.248536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.248563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.248686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.248713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.248846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.248874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.249043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.249070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.249198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.249225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.249361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.249389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.249549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.249575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.249712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.249739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.249897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.249935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.250074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.250100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.250271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.250299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.250456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.250483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.250626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.250653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.250790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.250816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.250968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.250995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.251158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.251187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.251314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.251341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.251499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.251526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.251668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.251694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.251835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.251862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.252018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.252044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.252213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.252245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.252403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.252430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.252564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.252591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.252732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.252758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.252926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.252953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.253110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.253136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.253275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.253301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.253460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.253486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.253623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.253649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.253787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.253813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 
00:25:10.876 [2024-07-15 16:41:50.253972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.253998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.254131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.254157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.254281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.254307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.254494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.876 [2024-07-15 16:41:50.254520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.876 qpair failed and we were unable to recover it. 00:25:10.876 [2024-07-15 16:41:50.254652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.254678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.254821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.254848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.255010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.255037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.255222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.255249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.255409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.255435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.255567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.255604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.255770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.255797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.255955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.255981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.256123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.256149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.256339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.256366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.256531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.256558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.256709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.256736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.256869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.256902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.257033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.257063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.257303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.257330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.257491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.257518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.257656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.257683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.257842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.257869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.258054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.258080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.258259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.258286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.258451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.258478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.258640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.258667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.258820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.258847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.259026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.259053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.259239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.259265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.259410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.259436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.259567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.259594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.259738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.259764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.259933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.259960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.260086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.260112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.260277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.260304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.260466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.260492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.260617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.260643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.260775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.260801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.260957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.260983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.261129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.261154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.877 [2024-07-15 16:41:50.261297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.261324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.261489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.261515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.261650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.261677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.261831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.261858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 00:25:10.877 [2024-07-15 16:41:50.262033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.877 [2024-07-15 16:41:50.262063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.877 qpair failed and we were unable to recover it. 
00:25:10.879 [previous messages (connect() failed, errno = 111; sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated with identical content through 2024-07-15 16:41:50.281679]
00:25:10.879 [2024-07-15 16:41:50.281805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.281831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.281972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.281998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.282125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.282151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.282291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.282318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.282484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.282511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 
00:25:10.879 [2024-07-15 16:41:50.282648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.282675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.282798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.282825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.282999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.283026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.283180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.283206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.283360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.283387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 
00:25:10.879 [2024-07-15 16:41:50.283531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.283558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.283697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.283724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.283865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.283897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.284035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.284061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.284201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.284228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 
00:25:10.879 [2024-07-15 16:41:50.284371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.284398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.284563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.284589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.284746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.284773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.284914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.284941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.285090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.285116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 
00:25:10.879 [2024-07-15 16:41:50.285279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.285306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.285455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.285481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.879 [2024-07-15 16:41:50.285654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.879 [2024-07-15 16:41:50.285680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.879 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.285806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.285833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.285966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.285992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.286127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.286153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.286292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.286319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.286479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.286506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.286637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.286663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.286799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.286826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.286968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.286995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.287153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.287180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.287411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.287438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.287565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.287591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.287723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.287749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.287921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.287948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.288186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.288212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.288371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.288397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.288564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.288591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.288763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.288790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.288958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.288986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.289128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.289155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.289319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.289346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.289484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.289510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.289736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.289763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.289962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.289989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.290119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.290145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.290283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.290310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.290454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.290480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.290616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.290643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.290806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.290832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.290962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.290990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.291133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.291164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.291392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.291419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.291569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.291595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.291733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.291760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.291889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.291916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.292054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.292081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.292251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.292278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.292403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.292429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.292569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.292595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.292750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.292777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.292947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.292975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.293113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.293139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.293268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.293295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.293449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.293476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.293629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.293656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.293820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.293846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.294012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.294039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.294170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.294197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.294354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.294381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.294517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.294543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.294685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.294715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.294881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.294909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 00:25:10.880 [2024-07-15 16:41:50.295052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.880 [2024-07-15 16:41:50.295079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.880 qpair failed and we were unable to recover it. 
00:25:10.880 [2024-07-15 16:41:50.295204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.295231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.880 [2024-07-15 16:41:50.295376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.295403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.880 [2024-07-15 16:41:50.295529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.295555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.880 [2024-07-15 16:41:50.295710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.295736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.880 [2024-07-15 16:41:50.295900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.295932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.880 [2024-07-15 16:41:50.296070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.296097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.880 [2024-07-15 16:41:50.296227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.296254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.880 [2024-07-15 16:41:50.296387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.880 [2024-07-15 16:41:50.296414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.880 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.296552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.296579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.296706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.296732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.296861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.296893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.297050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.297077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.297299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.297325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.297450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.297477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.297661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.297688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.297849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.297881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.298013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.298040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.298196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.298223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.298388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.298415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.298570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.298597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.298730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.298757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.298910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.298938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.299097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.299125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.299276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.299302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.299434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.299461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.299646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.299673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.299810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.299837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.300005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.300032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.300162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.300189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.300344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.300370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.300525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.300552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.300685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.300715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.300853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.300907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.301095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.301121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.301309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.301335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.301506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.301533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.301656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.301683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.301815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.301844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.301990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.302017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.302153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.302180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.302339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.302365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.302503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.302529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.302665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.302692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.302822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.302849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.303020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.303047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.303176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.303203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.303342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.303369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.303531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.303559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.303718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.303745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.303901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.303928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.304068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.304095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.304227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.304254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.304411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.304438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.304589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.304616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.304788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.304815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.304942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.304969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.305125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.305152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.305308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.305335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.305520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.305547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.305677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.305704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.305829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.305855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.306028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.306055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.306183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.306209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.306341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.306368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.306493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.306520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.306678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.306705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.306834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.306860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.306999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.307025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.307191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.307218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.307375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.307402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.307526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.307553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.307695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.307721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.307866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.307900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.308065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.308092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.308281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.881 [2024-07-15 16:41:50.308307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.881 qpair failed and we were unable to recover it.
00:25:10.881 [2024-07-15 16:41:50.308437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.308464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.308618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.308644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.308781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.308808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.308992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.309019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.309245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.309272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.309440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.309467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.309636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.309662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.309798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.309825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.309996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.310024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.310158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.310185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.310346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.310373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.310508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.310535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.310720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.310747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.310910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.310937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.311098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.311125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.311256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.311283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.311469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.311496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.311630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.311656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.311818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.311845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.311980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.312007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.312139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.312166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.312319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.312346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.312508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.312534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.312689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.312716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.312858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.312895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.313050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.313077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.313232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.313259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.313417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.313443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.313576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.313603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.313736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.313763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.313920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.313947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.314081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.314107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.314240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.314267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.314397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.314423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.314590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.314617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.314772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.314799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.314960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.314987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.315149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.315176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.315306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.315333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.315466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.882 [2024-07-15 16:41:50.315492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.882 qpair failed and we were unable to recover it.
00:25:10.882 [2024-07-15 16:41:50.315633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.315660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.315805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.315832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.315992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.316019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.316145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.316171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.316304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.316331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 
00:25:10.882 [2024-07-15 16:41:50.316510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.316536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.316667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.316694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.316847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.316874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.317032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.317059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.317220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.317247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 
00:25:10.882 [2024-07-15 16:41:50.317433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.317460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.317618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.317649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.317786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.317813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.317969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.317997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.318133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.318160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 
00:25:10.882 [2024-07-15 16:41:50.318295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.318323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.318490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.318517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.318663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.318690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.318826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.318853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.318995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.319022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 
00:25:10.882 [2024-07-15 16:41:50.319188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.319214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.319388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.319416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.319575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.319602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.319730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.319757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.319888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.319915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 
00:25:10.882 [2024-07-15 16:41:50.320086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.882 [2024-07-15 16:41:50.320114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.882 qpair failed and we were unable to recover it. 00:25:10.882 [2024-07-15 16:41:50.320239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.320266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.320435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.320462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.320618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.320645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.320775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.320802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.320929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.320978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.321121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.321148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.321292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.321318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.321501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.321528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.321683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.321709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.321845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.321872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.322013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.322040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.322164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.322190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.322346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.322373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.322543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.322571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.322730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.322756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.322907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.322934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.323070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.323097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.323265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.323292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.323424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.323451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.323606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.323633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.323781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.323808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.323940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.323968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.324138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.324164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.324324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.324351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.324488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.324514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.324674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.324701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.324891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.324918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.325045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.325072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.325225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.325252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.325409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.325435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.325560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.325586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.325717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.325744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.325903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.325930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.326062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.326088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.326258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.326285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.326445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.326472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.326595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.326621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.326778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.326804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.326957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.326984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.327108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.327135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.327292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.327319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.327452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.327478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.327638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.327665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.327825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.327852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.328019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.328047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.328200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.328226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.328366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.328392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.328583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.328609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.328742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.328769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.883 [2024-07-15 16:41:50.328896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.328924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.329073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.329100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.329231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.329257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.329388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.329415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 00:25:10.883 [2024-07-15 16:41:50.329552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.883 [2024-07-15 16:41:50.329582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.883 qpair failed and we were unable to recover it. 
00:25:10.885 [... identical error triple (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with advancing timestamps through 2024-07-15 16:41:50.349070 ...]
00:25:10.885 [2024-07-15 16:41:50.349202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.349229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.349419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.349446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.349578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.349606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.349753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.349779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.349949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.349977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 
00:25:10.885 [2024-07-15 16:41:50.350112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.350138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.350297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.350324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.350515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.350542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.350671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.350698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.350827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.350853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 
00:25:10.885 [2024-07-15 16:41:50.350982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.351009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.351138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.351169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.351308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.351334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.351480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.351507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.351659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.351686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 
00:25:10.885 [2024-07-15 16:41:50.351843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.351869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.352010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.352036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.352192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.352219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.352375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.352402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.352573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.352600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 
00:25:10.885 [2024-07-15 16:41:50.352725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.352752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.352898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.352925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.353091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.353117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.353251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.885 [2024-07-15 16:41:50.353277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.885 qpair failed and we were unable to recover it. 00:25:10.885 [2024-07-15 16:41:50.353463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.353489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.353626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.353652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.353796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.353822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.353979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.354006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.354174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.354200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.354343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.354370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.354524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.354551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.354685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.354711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.354894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.354923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.355053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.355079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.355206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.355233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.355354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.355380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.355506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.355533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.355688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.355714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.355887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.355918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.356043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.356070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.356207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.356234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.356390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.356417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.356544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.356571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.356701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.356728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.356866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.356897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.357055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.357082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.357243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.357269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.357403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.357429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.357557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.357583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.357735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.357762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.357900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.357927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.358061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.358088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.358225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.358253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.358414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.358441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.358566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.358593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.358766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.358793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.358926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.358954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.359085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.359112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.359279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.359306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.359427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.359454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.359582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.359608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.359735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.359761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.359920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.359947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.360113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.360139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.360295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.360322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.360473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.360499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.360665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.360693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.360825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.360852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.360991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.361018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.361145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.361171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.361313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.361340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.361495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.361521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.361649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.361675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.361816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.361843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 00:25:10.886 [2024-07-15 16:41:50.362001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.886 [2024-07-15 16:41:50.362029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.886 qpair failed and we were unable to recover it. 
00:25:10.886 [2024-07-15 16:41:50.362194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.886 [2024-07-15 16:41:50.362221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.886 qpair failed and we were unable to recover it.
[... the same failure triplet (connect() errno = 111, sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 16:41:50.362194 through 16:41:50.382108; intermediate repeats omitted ...]
00:25:10.888 [2024-07-15 16:41:50.382082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.888 [2024-07-15 16:41:50.382108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:10.888 qpair failed and we were unable to recover it.
00:25:10.888 [2024-07-15 16:41:50.382264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.382290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.382451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.382479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.382607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.382635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.382791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.382817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.382997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.383025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 
00:25:10.888 [2024-07-15 16:41:50.383157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.383184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.383337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.383363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.383522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.383548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.383677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.383704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.383830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.383856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 
00:25:10.888 [2024-07-15 16:41:50.383997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.384024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.384209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.384254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.384438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.384466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.384601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.384629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.384764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.384791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 
00:25:10.888 [2024-07-15 16:41:50.384953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.384981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.385127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.385154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.385324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.385351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.385543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.385570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.888 qpair failed and we were unable to recover it. 00:25:10.888 [2024-07-15 16:41:50.385719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.888 [2024-07-15 16:41:50.385747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.385937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.385965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.386108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.386136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.386280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.386307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.386470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.386497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.386662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.386691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.386842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.386869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.387033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.387061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.387216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.387244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.387373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.387400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.387581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.387609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.387756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.387785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.387947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.387976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.388115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.388142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.388308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.388336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.388534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.388562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.388701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.388728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.388859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.388892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.389028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.389055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.389240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.389269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.389426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.389453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.389589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.389616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.389746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.389774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.389936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.389964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.390104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.390131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.390298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.390326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.390458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.390485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.390648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.390675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.390812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.390840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.391005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.391034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.391197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.391223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.391367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.391395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.391558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.391591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.391751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.391778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.391969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.391998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.392156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.392184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.392330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.392357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.392519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.392546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.392705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.392732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.392894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.392922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.393110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.393138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.393309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.393336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.393476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.393503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.393662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.393690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.393845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.393873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.394057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.394084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.394225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.394253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.394414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.394442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.394607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.394634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.394789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.394817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.394953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.394982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.395123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.395150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.395288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.395315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.395474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.395502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.395631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.395657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 00:25:10.889 [2024-07-15 16:41:50.395823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.889 [2024-07-15 16:41:50.395851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.889 qpair failed and we were unable to recover it. 
00:25:10.889 [2024-07-15 16:41:50.395993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.889 [2024-07-15 16:41:50.396021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.889 qpair failed and we were unable to recover it.
00:25:10.889 [2024-07-15 16:41:50.396177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.889 [2024-07-15 16:41:50.396205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.889 qpair failed and we were unable to recover it.
00:25:10.889 [2024-07-15 16:41:50.396358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.889 [2024-07-15 16:41:50.396386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.889 qpair failed and we were unable to recover it.
00:25:10.889 [2024-07-15 16:41:50.396545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.889 [2024-07-15 16:41:50.396574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.889 qpair failed and we were unable to recover it.
00:25:10.889 [2024-07-15 16:41:50.396735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.889 [2024-07-15 16:41:50.396763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.889 qpair failed and we were unable to recover it.
00:25:10.889 [2024-07-15 16:41:50.396901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.889 [2024-07-15 16:41:50.396929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.889 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.397083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.397111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.397236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.397262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.397415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.397442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.397571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.397599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.397766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.397793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.397938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.397966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.398104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.398132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.398290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.398318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.398480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.398507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.398683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.398709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.398855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.398891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.399035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.399061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.399247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.399274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.399398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.399425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.399586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.399612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.399775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.399803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.399951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.399979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.400138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.400166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.400326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.400352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.400510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.400537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.400693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.400721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.400855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.400897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.401045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.401072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.401202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.401230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.401398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.401426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.401580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.401606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.401768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.401797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.401948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.401977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.402109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.402137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.402298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.402325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.402457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.402483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.402644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.402671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.402800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.402827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.402960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.402987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.403120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.403148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.403277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.403304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.403458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.403484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.403624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.403652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.403780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.403807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.403940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.403967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.404122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.404150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.404286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.404314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.404474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.404502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.404636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.404663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.404805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.404831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.404995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.405021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.405183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.405211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.405350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.405376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.405533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.405559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.405719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.405746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.405913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.405946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.406112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.406138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.406298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.406326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.406496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.406524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.406679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.406708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.406872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.406916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.407048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.407076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.407252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.407279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.407418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.407447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.407580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.407608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.407795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.407823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.890 [2024-07-15 16:41:50.407957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.890 [2024-07-15 16:41:50.407985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.890 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.408119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.408146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.408284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.408312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.408476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.408504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.408643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.408672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.408828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.408856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.409032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.409060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.409218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.409246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.409408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.409435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.409560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.409587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.409747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.409775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.409938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.409967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.410100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.410128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.410262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.410289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.410446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.410474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.410607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.410635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.410827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.410855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.411035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.411078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.411272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.411301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.411439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.411466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.411629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.411656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.411785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.411812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.411969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.411997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.412133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.412160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.412324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.412352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.412491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.412518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.412683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.412713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.412848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.412882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.413032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.413060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.413250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.413282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.413444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.413473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.413607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.413635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.413793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.413821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.413986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.414015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.414169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.414197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.414356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.414384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.414549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.414577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.414708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.414735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.414872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.414916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.415047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.415075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.415258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.415286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.415434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.415463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.415600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.415628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.415762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.415790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.415926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.415955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.416093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.416121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.416254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.416282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.416409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.891 [2024-07-15 16:41:50.416436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:10.891 qpair failed and we were unable to recover it.
00:25:10.891 [2024-07-15 16:41:50.416564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.416593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 00:25:10.891 [2024-07-15 16:41:50.416780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.416808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 00:25:10.891 [2024-07-15 16:41:50.416981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.417010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 00:25:10.891 [2024-07-15 16:41:50.417143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.417171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 00:25:10.891 [2024-07-15 16:41:50.417329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.417357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 
00:25:10.891 [2024-07-15 16:41:50.417490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.417517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 00:25:10.891 [2024-07-15 16:41:50.417672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.417700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 00:25:10.891 [2024-07-15 16:41:50.417888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.417917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.891 qpair failed and we were unable to recover it. 00:25:10.891 [2024-07-15 16:41:50.418066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.891 [2024-07-15 16:41:50.418094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.418221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.418249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.418370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.418399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.418563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.418592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.418721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.418749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.418907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.418936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.419069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.419097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.419252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.419280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.419410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.419437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.419596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.419624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.419777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.419806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.419968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.419996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.420156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.420184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.420344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.420375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.420531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.420559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.420719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.420747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.420903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.420932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.421087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.421114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.421244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.421271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.421418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.421446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.421633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.421660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.421821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.421848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.422018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.422047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.422188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.422216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.422366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.422393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.422530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.422558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.422690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.422719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.422904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.422946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.423095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.423123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.423309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.423336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.423469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.423496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.423624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.423651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.423822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.423849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.423984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.424011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.424153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.424180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.424335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.424362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.424490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.424517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.424650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.424676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.424803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.424830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.424985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.425013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.425179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.425207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.425354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.425381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.425512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.425539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.425691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.425719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.425883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.425910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.426039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.426066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.426196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.426223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.426379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.426407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.426567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.426594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.426726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.426753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.426894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.426923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.427083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.427111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.427268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.427295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.427455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.427487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.427618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.427645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.427815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.427842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.427976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.428005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.428165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.428192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.428343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.428370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.428528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.428554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.428740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.428766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.428898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.428925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.429066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.429093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.429221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.429249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.429376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.429402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.429563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.429590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 00:25:10.892 [2024-07-15 16:41:50.429729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.429771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.892 qpair failed and we were unable to recover it. 
00:25:10.892 [2024-07-15 16:41:50.429930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.892 [2024-07-15 16:41:50.429960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.893 qpair failed and we were unable to recover it. 00:25:10.893 [2024-07-15 16:41:50.430123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.893 [2024-07-15 16:41:50.430150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.893 qpair failed and we were unable to recover it. 00:25:10.893 [2024-07-15 16:41:50.430279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.893 [2024-07-15 16:41:50.430307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.893 qpair failed and we were unable to recover it. 00:25:10.893 [2024-07-15 16:41:50.430460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.893 [2024-07-15 16:41:50.430487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.893 qpair failed and we were unable to recover it. 00:25:10.893 [2024-07-15 16:41:50.430626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.893 [2024-07-15 16:41:50.430653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:10.893 qpair failed and we were unable to recover it. 
00:25:10.893 [... the same error pair repeats ~110 more times between 2024-07-15 16:41:50.430 and 16:41:50.450 (log time 00:25:10.893 through 00:25:11.182): posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 (briefly also tqpair=0x7f33f8000b90) with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:25:11.182 [2024-07-15 16:41:50.450553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.450580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.450767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.450796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.450960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.450987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.451154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.451181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.451308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.451334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 
00:25:11.182 [2024-07-15 16:41:50.451497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.451524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.451654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.451680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.451839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.451888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.452038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.452068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 00:25:11.182 [2024-07-15 16:41:50.452205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 16:41:50.452233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.182 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.452396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.452424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.452563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.452592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.452723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.452750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.452938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.452967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.453101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.453128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.453260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.453288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.453449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.453476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.453646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.453674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.453822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.453850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.453986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.454014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.454181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.454210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.454377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.454405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.454562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.454590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.454725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.454752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.454901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.454930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.455091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.455118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.455253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.455280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.455428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.455455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.455578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.455606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.455735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.455764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.455916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.455944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.456117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.456149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.456338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.456366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.456534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.456563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.456717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.456744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.456902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.456931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.457108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.457136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.457270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.457298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.457458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.457485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.457616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.457643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.457785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.457812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.457971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.457999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.458164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.458191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.458348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.458376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.458507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.458536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.458703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.458731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.458872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.458905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.459034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.459062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.459191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.459218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.183 [2024-07-15 16:41:50.459394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.459421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 
00:25:11.183 [2024-07-15 16:41:50.459565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.183 [2024-07-15 16:41:50.459593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.183 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.459753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.459781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.459941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.459969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.460094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.460121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.460248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.460275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 
00:25:11.184 [2024-07-15 16:41:50.460409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.460438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.460605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.460632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.460756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.460784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.460959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.460989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.461123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.461150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 
00:25:11.184 [2024-07-15 16:41:50.461276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.461304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.461452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.461479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.461612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.461640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.461828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.461854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.462005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.462034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 
00:25:11.184 [2024-07-15 16:41:50.462176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.462204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.462347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.462375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.462561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.462589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.462721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.462749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.462910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.462940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 
00:25:11.184 [2024-07-15 16:41:50.463104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.463131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.463258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.463289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.463417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.463443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.463570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.463597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 00:25:11.184 [2024-07-15 16:41:50.463742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.184 [2024-07-15 16:41:50.463770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.184 qpair failed and we were unable to recover it. 
00:25:11.184 [2024-07-15 16:41:50.463903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.184 [2024-07-15 16:41:50.463933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.184 qpair failed and we were unable to recover it.
00:25:11.184 [2024-07-15 16:41:50.464659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.184 [2024-07-15 16:41:50.464688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.184 qpair failed and we were unable to recover it.
[... identical three-line failure sequence repeated through 16:41:50.486938, alternating between tqpair=0x7f33e8000b90 and tqpair=0x7f33f8000b90; every attempt is connect() failed, errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:25:11.192 [2024-07-15 16:41:50.487079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.487106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.487279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.487306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.487442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.487470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.487631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.487657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.487818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.487845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 
00:25:11.192 [2024-07-15 16:41:50.488017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.488044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.488207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.488234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.488362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.488389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.488553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.488580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.488711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.488738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 
00:25:11.192 [2024-07-15 16:41:50.488860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.488894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.489029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.489055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.489187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.489215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.489382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.489409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.489570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.489596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 
00:25:11.192 [2024-07-15 16:41:50.489774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.489801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.489940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.489967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.490106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.490133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.490282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.490309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.490470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.490496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 
00:25:11.192 [2024-07-15 16:41:50.490620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.490647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.490776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.490803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.490962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.490989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.491152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.192 [2024-07-15 16:41:50.491179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.192 qpair failed and we were unable to recover it. 00:25:11.192 [2024-07-15 16:41:50.491304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.491331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.491518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.491545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.491714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.491741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.491912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.491939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.492093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.492124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.492292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.492319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.492509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.492536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.492707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.492734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.492882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.492910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.493040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.493067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.493230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.493256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.493397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.493424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.493580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.493607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.493764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.493791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.493934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.493962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.494107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.494134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.494296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.494323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.494511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.494538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.494678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.494705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.494860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.494893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.495031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.495058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.495216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.495243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.495390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.495417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.495541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.495568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.495734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.495761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.495895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.495923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.496084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.496111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.496280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.496306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.496460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.496487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.496615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.496642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.496791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.496818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.496955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.496982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.497115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.497143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.497288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.497315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.497476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.497503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.497635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.497662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.497792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.497820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.497982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.498011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.498143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.498169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.498294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.498321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 00:25:11.193 [2024-07-15 16:41:50.498501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.193 [2024-07-15 16:41:50.498528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.193 qpair failed and we were unable to recover it. 
00:25:11.193 [2024-07-15 16:41:50.498686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.498714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.498841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.498867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.499007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.499035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.499213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.499244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.499401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.499428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 
00:25:11.194 [2024-07-15 16:41:50.499584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.499611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.499752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.499779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.499913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.499941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.500065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.500092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 00:25:11.194 [2024-07-15 16:41:50.500283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.194 [2024-07-15 16:41:50.500310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.194 qpair failed and we were unable to recover it. 
00:25:11.194 [2024-07-15 16:41:50.500444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.194 [2024-07-15 16:41:50.500470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.194 qpair failed and we were unable to recover it.
00:25:11.197 (the three messages above repeat continuously for tqpair=0x7f33f8000b90 from 16:41:50.500444 through 16:41:50.520760)
00:25:11.197 [2024-07-15 16:41:50.520917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.520944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.521137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.521164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.521322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.521349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.521485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.521512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.521639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.521666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 
00:25:11.197 [2024-07-15 16:41:50.521836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.521863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.522006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.522033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.522185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.522212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.522382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.522409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.522539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.522566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 
00:25:11.197 [2024-07-15 16:41:50.522689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.522715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.522857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.522890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.523051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.523078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.523266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.523292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.523453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.523481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 
00:25:11.197 [2024-07-15 16:41:50.523646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.523673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.523832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.523858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.523993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.524021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.524173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.524200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.524353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.524380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 
00:25:11.197 [2024-07-15 16:41:50.524512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.524539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.524698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.524724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.524886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.524913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.525048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.525075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 00:25:11.197 [2024-07-15 16:41:50.525249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.197 [2024-07-15 16:41:50.525276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.197 qpair failed and we were unable to recover it. 
00:25:11.197 [2024-07-15 16:41:50.526044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.197 [2024-07-15 16:41:50.526085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.197 qpair failed and we were unable to recover it.
00:25:11.197 [2024-07-15 16:41:50.526438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.526466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.526592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.526619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.526763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.526791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.526938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.526970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.527117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.527144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 
00:25:11.198 [2024-07-15 16:41:50.527308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.527334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.527466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.527492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.527617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.527644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.527776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.527803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.527982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.528010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 
00:25:11.198 [2024-07-15 16:41:50.528146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.528178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.528350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.528377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.528501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.528528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.528702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.528729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.528863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.528896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 
00:25:11.198 [2024-07-15 16:41:50.529041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.529066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.529200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.529226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.529381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.529408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.529566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.529593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 00:25:11.198 [2024-07-15 16:41:50.529748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.198 [2024-07-15 16:41:50.529774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.198 qpair failed and we were unable to recover it. 
00:25:11.198 [2024-07-15 16:41:50.530279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.198 [2024-07-15 16:41:50.530320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.198 qpair failed and we were unable to recover it.
00:25:11.199 [2024-07-15 16:41:50.538061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.538087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.538221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.538248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.538402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.538428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.538599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.538625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.538762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.538788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 
00:25:11.199 [2024-07-15 16:41:50.538973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.539001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.539133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.539161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.539343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.539370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.539500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.539527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 00:25:11.199 [2024-07-15 16:41:50.539682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.199 [2024-07-15 16:41:50.539709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.199 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.539837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.539868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.540000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.540028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.540185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.540212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.540333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.540360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.540534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.540561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.540689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.540717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.540851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.540883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.541052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.541079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.541224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.541252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.541384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.541411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.541539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.541566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.541729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.541757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.541923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.541950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.542114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.542142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.542337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.542364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.542507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.542534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.542692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.542718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.542839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.542865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.543023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.543051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.543210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.543237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.543370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.543397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.543524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.543551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.543721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.543748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.543888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.543916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.544103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.544130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.544264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.544291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.544416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.544444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.544574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.544601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.544763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.544790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.544943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.544970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.545101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.545127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.545289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.545316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.545452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.545479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.545637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.545664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.545822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.545849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.200 [2024-07-15 16:41:50.545985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.546012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.546155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.546182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.546364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.546390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.546523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.546549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 00:25:11.200 [2024-07-15 16:41:50.546691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.200 [2024-07-15 16:41:50.546718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.200 qpair failed and we were unable to recover it. 
00:25:11.201 [2024-07-15 16:41:50.546874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.546910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.547082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.547108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.547267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.547294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.547426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.547453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.547608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.547635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 
00:25:11.201 [2024-07-15 16:41:50.547779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.547806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.547994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.548021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.548145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.548172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.548326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.548353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.548513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.548539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 
00:25:11.201 [2024-07-15 16:41:50.548661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.548688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.548829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.548857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.548986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.549013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.549154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.549181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.549326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.549354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 
00:25:11.201 [2024-07-15 16:41:50.549509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.549536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.549667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.549694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.549850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.549882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.550045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.550073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.550227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.550254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 
00:25:11.201 [2024-07-15 16:41:50.550381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.550408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.550568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.550594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.550757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.550785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.550910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.550938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.551065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.551091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 
00:25:11.201 [2024-07-15 16:41:50.551228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.551255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.551411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.551437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.551593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.551619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.551771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.551797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 00:25:11.201 [2024-07-15 16:41:50.551933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.201 [2024-07-15 16:41:50.551961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.201 qpair failed and we were unable to recover it. 
00:25:11.204 [2024-07-15 16:41:50.571224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.571251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.571417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.571444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.571595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.571622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.571762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.571788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.571953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.571982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 
00:25:11.204 [2024-07-15 16:41:50.572122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.572149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.572294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.572321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.572457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.572488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.572652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.572678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.572833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.572859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 
00:25:11.204 [2024-07-15 16:41:50.572992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.204 [2024-07-15 16:41:50.573019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.204 qpair failed and we were unable to recover it. 00:25:11.204 [2024-07-15 16:41:50.573161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.573189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.573328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.573355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.573527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.573555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.573682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.573709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.573861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.573905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.574054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.574081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.574238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.574265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.574392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.574419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.574562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.574588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.574770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.574797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.574973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.575001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.575142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.575169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.575301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.575328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.575505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.575532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.575658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.575685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.575841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.575868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.576010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.576037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.576194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.576221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.576385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.576412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.576576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.576604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.576798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.576825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.576965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.576993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.577148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.577175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.577348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.577375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.577503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.577530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.577660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.577686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.577810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.577837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.577999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.578027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.578156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.578182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.578304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.578331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.578515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.578541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.578677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.578704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.578833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.578859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.579037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.579064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.579188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.579215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.579354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.579381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.579506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.579537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.579693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.579719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.579858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.579892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 
00:25:11.205 [2024-07-15 16:41:50.580032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.580059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.205 qpair failed and we were unable to recover it. 00:25:11.205 [2024-07-15 16:41:50.580209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.205 [2024-07-15 16:41:50.580236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.580401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.580428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.580616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.580643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.580800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.580826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 
00:25:11.206 [2024-07-15 16:41:50.580962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.580990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.581140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.581167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.581327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.581354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.581482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.581509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.581667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.581694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 
00:25:11.206 [2024-07-15 16:41:50.581886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.581913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.582082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.582109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.582238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.582265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.582408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.582434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.582562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.582588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 
00:25:11.206 [2024-07-15 16:41:50.582730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.582757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.582887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.582914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.583055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.583082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.583217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.583244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.583401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.583428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 
00:25:11.206 [2024-07-15 16:41:50.583589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.583616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.583773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.583800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.583945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.583972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.584113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.584140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 00:25:11.206 [2024-07-15 16:41:50.584312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.206 [2024-07-15 16:41:50.584339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.206 qpair failed and we were unable to recover it. 
00:25:11.206 [2024-07-15 16:41:50.584474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.206 [2024-07-15 16:41:50.584502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.206 qpair failed and we were unable to recover it.
00:25:11.209 [2024-07-15 16:41:50.604281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.209 [2024-07-15 16:41:50.604307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.209 qpair failed and we were unable to recover it. 00:25:11.209 [2024-07-15 16:41:50.604439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.209 [2024-07-15 16:41:50.604465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.209 qpair failed and we were unable to recover it. 00:25:11.209 [2024-07-15 16:41:50.604595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.209 [2024-07-15 16:41:50.604620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.209 qpair failed and we were unable to recover it. 00:25:11.209 [2024-07-15 16:41:50.604800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.209 [2024-07-15 16:41:50.604826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.209 qpair failed and we were unable to recover it. 00:25:11.209 [2024-07-15 16:41:50.604955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.209 [2024-07-15 16:41:50.604982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.209 qpair failed and we were unable to recover it. 
00:25:11.209 [2024-07-15 16:41:50.605132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.605157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.605310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.605335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.605463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.605488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.605676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.605701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.605856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.605887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 
00:25:11.210 [2024-07-15 16:41:50.606029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.606055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.606216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.606241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.606382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.606407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.606559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.606584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.606760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.606785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 
00:25:11.210 [2024-07-15 16:41:50.606924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.606952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.607133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.607159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.607287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.607317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.607504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.607529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.607690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.607715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 
00:25:11.210 [2024-07-15 16:41:50.607849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.607874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.608032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.608058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.608188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.608214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.608358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.608384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.608512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.608536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 
00:25:11.210 [2024-07-15 16:41:50.608692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.608717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.608858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.608899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.609064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.609090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.609234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.609259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.609395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.609420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 
00:25:11.210 [2024-07-15 16:41:50.609577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.609603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.609751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.609778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.609945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.609971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.610098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.610123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.610262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.610289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 
00:25:11.210 [2024-07-15 16:41:50.610446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.610472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.610624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.610650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.610813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.610838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.610990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.611016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 00:25:11.210 [2024-07-15 16:41:50.611141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.210 [2024-07-15 16:41:50.611167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.210 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.611325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.611350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.611485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.611511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.611668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.611694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.611853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.611882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.612030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.612056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.612196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.612221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.612345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.612370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.612504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.612530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.612661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.612686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.612844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.612871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.613015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.613042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.613172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.613198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.613333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.613358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.613521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.613546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.613685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.613710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.613868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.613898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.614031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.614056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.614179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.614209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.614355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.614380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.614539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.614564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.614713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.614738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.614896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.614923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.615053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.615078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.615200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.615225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.615366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.615392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.615522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.615547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.615717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.615742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.615882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.615907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.616074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.616099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.616260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.616285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.616442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.616466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.616600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.616627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.616757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.616784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.616919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.616950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.617087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.617112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 
00:25:11.211 [2024-07-15 16:41:50.617276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.617302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.617426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.211 [2024-07-15 16:41:50.617451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.211 qpair failed and we were unable to recover it. 00:25:11.211 [2024-07-15 16:41:50.617595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.617621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.617747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.617772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.617930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.617956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 
00:25:11.212 [2024-07-15 16:41:50.618096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.618122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.618273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.618299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.618426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.618451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.618570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.618596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.618759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.618786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 
00:25:11.212 [2024-07-15 16:41:50.618968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.618994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.619150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.619176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.619322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.619347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.619503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.619528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.619653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.619678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 
00:25:11.212 [2024-07-15 16:41:50.619817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.619843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.620040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.620066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.620252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.620278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.620419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.620444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.620573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.620599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 
00:25:11.212 [2024-07-15 16:41:50.620766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.620792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.620924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.620951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.621073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.621102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.621237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.621262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.621401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.621426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 
00:25:11.212 [2024-07-15 16:41:50.621557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.621583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.621740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.621766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.621929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.621955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.622079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.212 [2024-07-15 16:41:50.622106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.212 qpair failed and we were unable to recover it. 00:25:11.212 [2024-07-15 16:41:50.622245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.622270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.622431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.622458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.622633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.622658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.622784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.622809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.622949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.622976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.623105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.623131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.623326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.623351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.623506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.623532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.623670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.623695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.623828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.623854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.623985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.624011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.624149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.624174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.624340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.624367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.624543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.624568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.624708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.624734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.624896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.624922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.625047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.625072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.625200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.625227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.625401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.625426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.625555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.625582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.625772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.625797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.625925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.625952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.626075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.626101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.626264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.626289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.626418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.626443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.626596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.626621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.626804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.626830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.626979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.627005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.627197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.627222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.627381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.627407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.627570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.627596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.627754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.627780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.627965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.627992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.628119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.628149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.628285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.628310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.628453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.628478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 
00:25:11.213 [2024-07-15 16:41:50.628616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.213 [2024-07-15 16:41:50.628641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.213 qpair failed and we were unable to recover it. 00:25:11.213 [2024-07-15 16:41:50.628791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.628817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.628945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.628971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.629133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.629159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.629288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.629315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 
00:25:11.214 [2024-07-15 16:41:50.629485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.629510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.629666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.629692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.629846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.629871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.630014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.630041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.630178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.630204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 
00:25:11.214 [2024-07-15 16:41:50.630376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.630401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.630541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.630566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.630701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.630728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.630887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.630913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.631040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.631065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 
00:25:11.214 [2024-07-15 16:41:50.631253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.631278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.631433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.631458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.631620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.631645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.631804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.631829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.631973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.632000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 
00:25:11.214 [2024-07-15 16:41:50.632128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.632153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.632285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.632311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.632496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.632521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.632648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.632673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.632826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.632868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 
00:25:11.214 [2024-07-15 16:41:50.633062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.633091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.633245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.633272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.633415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.633441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.633596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.633622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.633786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.633813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 
00:25:11.214 [2024-07-15 16:41:50.633954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.633982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.634112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.634144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.634306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.634332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.214 qpair failed and we were unable to recover it. 00:25:11.214 [2024-07-15 16:41:50.634477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.214 [2024-07-15 16:41:50.634504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.634662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.634688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.634831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.634857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.635032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.635058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.635194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.635235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.635378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.635406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.635576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.635603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.635756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.635782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.635942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.635969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.636098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.636124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.636282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.636308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.636447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.636473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.636600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.636627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.636821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.636861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.637049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.637075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.637225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.637251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.637409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.637435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.637608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.637634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.637769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.637794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.637944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.637972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.638130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.638167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.638329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.638355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.638482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.638509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.638672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.638699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.638856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.638890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.639044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.639069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.639221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.639247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.639410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.639436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.639571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.639599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.639730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.639756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.639903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.639953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.640121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.640158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.640290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.640316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.640442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.640468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.640600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.640628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.640828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.640854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.641033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.641072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 
00:25:11.215 [2024-07-15 16:41:50.641260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.215 [2024-07-15 16:41:50.641288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.215 qpair failed and we were unable to recover it. 00:25:11.215 [2024-07-15 16:41:50.641438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.641464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.641609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.641635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.641797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.641822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.641981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.642008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.642145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.642170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.642314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.642339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.642485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.642512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.642641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.642667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.642812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.642837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.642988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.643014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.643141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.643167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.643309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.643334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.643491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.643517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.643702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.643727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.643869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.643914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.644085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.644112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.644242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.644268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.644430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.644455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.644589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.644615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.644770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.644795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.644935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.644965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.645136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.645163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.645291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.645317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.645487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.645513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.645644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.645670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.645827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.645852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.646035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.646060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.646185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.646211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.646340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.646365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.646544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.646569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.646700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.646730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.646905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.646931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.647105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.647130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.647290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.647315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.647445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.647470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.647627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.647652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.647786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.647811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.647970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.647996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 00:25:11.216 [2024-07-15 16:41:50.648153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.648178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.216 qpair failed and we were unable to recover it. 
00:25:11.216 [2024-07-15 16:41:50.648320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.216 [2024-07-15 16:41:50.648345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.648512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.648538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.648667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.648692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.648831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.648856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.649011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.649051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 
00:25:11.217 [2024-07-15 16:41:50.649230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.649257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.649418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.649445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.649574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.649599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.649735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.649767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.649904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.649931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 
00:25:11.217 [2024-07-15 16:41:50.650091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.650116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.650258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.650283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.650436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.650462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.650618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.650646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.650780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.650805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 
00:25:11.217 [2024-07-15 16:41:50.650938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.650964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.651094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.651119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.651277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.651302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.651482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.651507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.651670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.651696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 
00:25:11.217 [2024-07-15 16:41:50.651822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.651848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.651983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.652009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.652203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.652229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.652371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.652396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 00:25:11.217 [2024-07-15 16:41:50.652530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.217 [2024-07-15 16:41:50.652555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.217 qpair failed and we were unable to recover it. 
00:25:11.217 [2024-07-15 16:41:50.652743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.652769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.652926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.652952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.653086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.653112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.653276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.653302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.653445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.653470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.653628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.653653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.653800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.653827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.653970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.217 [2024-07-15 16:41:50.653996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.217 qpair failed and we were unable to recover it.
00:25:11.217 [2024-07-15 16:41:50.654135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.654161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.654321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.654346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.654507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.654546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.654715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.654742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.654912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.654951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.655123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.655150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.655341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.655367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.655497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.655522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.655650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.655675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.655864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.655903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.656046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.656071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.656256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.656281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.656445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.656471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.656631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.656657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.656796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.656821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.656967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.656999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.657159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.657185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.657321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.657345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.657498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.657523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.657715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.657741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.657881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.657907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.658049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.658074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.658234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.658260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.658419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.658445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.658575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.658601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.658760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.658786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.658947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.658974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.659105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.659130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.659268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.659294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.659427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.659453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.659594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.659619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.659782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.659809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.659944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.659969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.660099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.660124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.660284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.660310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.660468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.660493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.660650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.660675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.660806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.660831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.660968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.660994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.218 [2024-07-15 16:41:50.661157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.218 [2024-07-15 16:41:50.661182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.218 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.661355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.661381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.661565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.661591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.661756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.661782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.661922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.661947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.662094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.662120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.662282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.662307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.662487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.662512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.662670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.662695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.662854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.662893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.663041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.663066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.663224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.663249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.663412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.663438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.663592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.663617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.663772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.663797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.663937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.663962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.664090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.664116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.664290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.664316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.664461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.664486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.664610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.664635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.664774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.664798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.664959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.664986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.665131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.665158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.665321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.665346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.665494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.665518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.665644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.665669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.665798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.665822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.665966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.665993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.666128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.666154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.666281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.666305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.666448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.666474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.666599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.666625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.666780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.666805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.666960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.666985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.667121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.667148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.667312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.667336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.667459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.667484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.667626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.667651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.667835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.667859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.667997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.668021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.668182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.219 [2024-07-15 16:41:50.668208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.219 qpair failed and we were unable to recover it.
00:25:11.219 [2024-07-15 16:41:50.668330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.220 [2024-07-15 16:41:50.668355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.220 qpair failed and we were unable to recover it.
00:25:11.220 [2024-07-15 16:41:50.668497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.220 [2024-07-15 16:41:50.668522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.220 qpair failed and we were unable to recover it.
00:25:11.220 [2024-07-15 16:41:50.668688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.220 [2024-07-15 16:41:50.668718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.220 qpair failed and we were unable to recover it.
00:25:11.220 [2024-07-15 16:41:50.668852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.220 [2024-07-15 16:41:50.668884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.220 qpair failed and we were unable to recover it.
00:25:11.220 [2024-07-15 16:41:50.669052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.220 [2024-07-15 16:41:50.669077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.220 qpair failed and we were unable to recover it.
00:25:11.220 [2024-07-15 16:41:50.669217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.220 [2024-07-15 16:41:50.669242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.220 qpair failed and we were unable to recover it.
00:25:11.220 [2024-07-15 16:41:50.669402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.220 [2024-07-15 16:41:50.669426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.220 qpair failed and we were unable to recover it.
00:25:11.220 [2024-07-15 16:41:50.669572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.669597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.669735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.669759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.669904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.669930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.670064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.670096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.670282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.670307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 
00:25:11.220 [2024-07-15 16:41:50.670442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.670467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.670593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.670619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.670759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.670783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.670928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.670954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.671146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.671171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 
00:25:11.220 [2024-07-15 16:41:50.671300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.671325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.671466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.671491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.671629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.671656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.671784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.671810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.671998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.672024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 
00:25:11.220 [2024-07-15 16:41:50.672157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.672183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.672323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.672348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.672492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.672518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.672654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.672681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.672840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.672864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 
00:25:11.220 [2024-07-15 16:41:50.673009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.673034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.673174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.673200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.673329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.673354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.673510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.673535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.673686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.673712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 
00:25:11.220 [2024-07-15 16:41:50.673848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.673872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.674042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.674067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.674201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.674227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.674419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.674444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.674605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.674630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 
00:25:11.220 [2024-07-15 16:41:50.674789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.674814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.674990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.675018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.675149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.675173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.675365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.675390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.675547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.675573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 
00:25:11.220 [2024-07-15 16:41:50.675736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.675767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.675908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.220 [2024-07-15 16:41:50.675937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.220 qpair failed and we were unable to recover it. 00:25:11.220 [2024-07-15 16:41:50.676066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.676090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.676228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.676254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.676389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.676414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.676546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.676572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.676724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.676749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.676888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.676915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.677062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.677087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.677221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.677246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.677389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.677413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.677568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.677593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.677756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.677784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.677955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.677981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.678161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.678186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.678346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.678372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.678515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.678540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.678666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.678691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.678850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.678882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.679051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.679076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.679212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.679237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.679395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.679421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.679548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.679572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.679728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.679753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.679884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.679910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.680066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.680091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.680263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.680288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.680434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.680460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.680597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.680623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.680779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.680804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.680985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.681012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.681153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.681179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.681334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.681358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.681485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.681510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.681673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.681699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.681852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.681882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.682041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.682066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.682232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.682258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.682380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.682404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.682535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.682560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 
00:25:11.221 [2024-07-15 16:41:50.682715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.682745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.682912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.221 [2024-07-15 16:41:50.682938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.221 qpair failed and we were unable to recover it. 00:25:11.221 [2024-07-15 16:41:50.683082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.683106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 00:25:11.222 [2024-07-15 16:41:50.683233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.683258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 00:25:11.222 [2024-07-15 16:41:50.683419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.683445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 
00:25:11.222 [2024-07-15 16:41:50.683573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.683598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 00:25:11.222 [2024-07-15 16:41:50.683772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.683797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 00:25:11.222 [2024-07-15 16:41:50.683953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.683980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 00:25:11.222 [2024-07-15 16:41:50.684134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.684160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 00:25:11.222 [2024-07-15 16:41:50.684289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.222 [2024-07-15 16:41:50.684314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.222 qpair failed and we were unable to recover it. 
00:25:11.222 [2024-07-15 16:41:50.684470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.222 [2024-07-15 16:41:50.684496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.222 qpair failed and we were unable to recover it.
00:25:11.224 [... the identical posix.c:1038 / nvme_tcp.c:2383 error triplet repeats continuously from 16:41:50.684470 through 16:41:50.704852, always for tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 ...]
00:25:11.224 [2024-07-15 16:41:50.704992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.705017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.705181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.705207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.705367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.705393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.705535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.705562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.705691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.705716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 
00:25:11.224 [2024-07-15 16:41:50.705871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.705902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.706050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.706077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.706204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.706229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.706416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.706442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.706579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.706605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 
00:25:11.224 [2024-07-15 16:41:50.706733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.706758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.706903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.706929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.707093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.707118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.707258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.707284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.224 qpair failed and we were unable to recover it. 00:25:11.224 [2024-07-15 16:41:50.707419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.224 [2024-07-15 16:41:50.707445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.707581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.707606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.707739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.707764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.707920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.707946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.708117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.708143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.708308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.708333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.708467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.708497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.708631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.708657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.708827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.708852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.709017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.709043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.709191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.709216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.709379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.709405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.709557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.709582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.709717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.709742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.709894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.709921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.710083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.710108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.710241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.710266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.710453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.710479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.710613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.710639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.710768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.710793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.710925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.710952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.711096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.711129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.711287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.711312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.711438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.711463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.711620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.711644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.711781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.711806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.711969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.711995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.712131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.712158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.712294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.712319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.712505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.712531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.712659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.712683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.712842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.712869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.713038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.713064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.713198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.713224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.713367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.713391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.713536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.713561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 
00:25:11.225 [2024-07-15 16:41:50.713689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.713713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.713886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.713913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.714059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.714085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.714222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.225 [2024-07-15 16:41:50.714247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.225 qpair failed and we were unable to recover it. 00:25:11.225 [2024-07-15 16:41:50.714422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.714447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 
00:25:11.226 [2024-07-15 16:41:50.714582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.714608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.714738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.714763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.714898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.714925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.715099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.715124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.715279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.715304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 
00:25:11.226 [2024-07-15 16:41:50.715441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.715468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.715600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.715625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.715755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.715781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.715968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.715995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.716128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.716152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 
00:25:11.226 [2024-07-15 16:41:50.716318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.716343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.716503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.716529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.716667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.716694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.716847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.716872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.717015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.717040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 
00:25:11.226 [2024-07-15 16:41:50.717198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.717224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.717369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.717394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.717580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.717605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.717736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.717762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.717903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.717930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 
00:25:11.226 [2024-07-15 16:41:50.718084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.718113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.718277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.718302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.718456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.718481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.718616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.718642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 00:25:11.226 [2024-07-15 16:41:50.718836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.226 [2024-07-15 16:41:50.718862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.226 qpair failed and we were unable to recover it. 
00:25:11.228 [2024-07-15 16:41:50.738539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.228 [2024-07-15 16:41:50.738565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.228 qpair failed and we were unable to recover it. 00:25:11.228 [2024-07-15 16:41:50.738725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.228 [2024-07-15 16:41:50.738750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.228 qpair failed and we were unable to recover it. 00:25:11.228 [2024-07-15 16:41:50.738911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.228 [2024-07-15 16:41:50.738938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.228 qpair failed and we were unable to recover it. 00:25:11.228 [2024-07-15 16:41:50.739073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.228 [2024-07-15 16:41:50.739099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.228 qpair failed and we were unable to recover it. 00:25:11.228 [2024-07-15 16:41:50.739261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.228 [2024-07-15 16:41:50.739286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.228 qpair failed and we were unable to recover it. 
00:25:11.228 [2024-07-15 16:41:50.739428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.228 [2024-07-15 16:41:50.739457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.739594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.739619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.739772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.739797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.739956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.739982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.740111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.740137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 
00:25:11.229 [2024-07-15 16:41:50.740329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.740355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.740505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.740530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.740682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.740708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.740845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.740870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.741048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.741074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 
00:25:11.229 [2024-07-15 16:41:50.741201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.741225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.741392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.741418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.741576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.741601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.741730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.741755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.741902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.741929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 
00:25:11.229 [2024-07-15 16:41:50.742055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.742080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.742206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.742232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.742391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.742418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.742540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.742564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.742736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.742763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 
00:25:11.229 [2024-07-15 16:41:50.742899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.742926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.743098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.743123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.743265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.743295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.743456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.743482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.743641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.743666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 
00:25:11.229 [2024-07-15 16:41:50.743799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.743824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.744010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.744037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.744197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.744223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.744360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.744385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.744575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.744600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 
00:25:11.229 [2024-07-15 16:41:50.744741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.744767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.744912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.744938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.745127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.229 [2024-07-15 16:41:50.745152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.229 qpair failed and we were unable to recover it. 00:25:11.229 [2024-07-15 16:41:50.745285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.745310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.745473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.745499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.745656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.745681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.745874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.745905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.746064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.746090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.746248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.746273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.746456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.746481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.746636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.746667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.746799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.746825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.746987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.747013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.747167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.747192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.747329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.747355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.747501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.747527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.747695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.747720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.747886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.747913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.748038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.748063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.748198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.748223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.748359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.748386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.748529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.748554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.748697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.748722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.748848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.748874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.749074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.749100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.749257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.749282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.749418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.749445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.749581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.749605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.749744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.749769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.749903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.749930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.750079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.750105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.750278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.750304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.750466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.750492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.750621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.750646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.750789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.750815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.751008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.751035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.751181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.751207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.751408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.751435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.751588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.751613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.751774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.751800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 
00:25:11.230 [2024-07-15 16:41:50.751992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.230 [2024-07-15 16:41:50.752018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.230 qpair failed and we were unable to recover it. 00:25:11.230 [2024-07-15 16:41:50.752156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.231 [2024-07-15 16:41:50.752181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.231 qpair failed and we were unable to recover it. 00:25:11.231 [2024-07-15 16:41:50.752311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.231 [2024-07-15 16:41:50.752336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.231 qpair failed and we were unable to recover it. 00:25:11.231 [2024-07-15 16:41:50.752479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.231 [2024-07-15 16:41:50.752505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.231 qpair failed and we were unable to recover it. 00:25:11.231 [2024-07-15 16:41:50.752690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.231 [2024-07-15 16:41:50.752716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.231 qpair failed and we were unable to recover it. 
00:25:11.518 [2024-07-15 16:41:50.772248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.772274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.772434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.772459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.772598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.772627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.772765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.772790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.772937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.772963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 
00:25:11.518 [2024-07-15 16:41:50.773103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.773128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.773285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.773310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.773437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.773464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.773648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.773673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.773811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.773837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 
00:25:11.518 [2024-07-15 16:41:50.774013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.774039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.774170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.774194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.774359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.774386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.774536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.774562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.774691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.774717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 
00:25:11.518 [2024-07-15 16:41:50.774851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.774882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.775027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.775054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.775181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.775206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.775334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.775360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.775499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.775523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 
00:25:11.518 [2024-07-15 16:41:50.775694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.775719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.518 qpair failed and we were unable to recover it. 00:25:11.518 [2024-07-15 16:41:50.775893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.518 [2024-07-15 16:41:50.775920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.776066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.776093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.776254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.776280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.776439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.776464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.776608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.776634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.776794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.776819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.776976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.777001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.777134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.777160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.777318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.777342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.777487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.777513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.777666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.777692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.777825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.777851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.778017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.778043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.778205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.778231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.778355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.778379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.778538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.778563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.778705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.778731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.778890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.778916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.779043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.779069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.779231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.779257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.779386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.779410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.779569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.779598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.779733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.779758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.779918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.779944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.780107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.780132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.780273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.780299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.780440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.780467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.780622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.780647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.780805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.780830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.780986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.781012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.781147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.781172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.781331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.781356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.781515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.781540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.781667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.781692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.781857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.781887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.782029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.782055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.782253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.782279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.782423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.782448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 00:25:11.519 [2024-07-15 16:41:50.782588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.519 [2024-07-15 16:41:50.782615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.519 qpair failed and we were unable to recover it. 
00:25:11.519 [2024-07-15 16:41:50.782774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.782799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.782957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.782983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.783142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.783167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.783325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.783351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.783539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.783564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 
00:25:11.520 [2024-07-15 16:41:50.783697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.783722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.783866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.783898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.784060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.784086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.784241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.784266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.784438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.784464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 
00:25:11.520 [2024-07-15 16:41:50.784637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.784663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.784819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.784844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.784989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.785014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.785175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.785201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.785337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.785362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 
00:25:11.520 [2024-07-15 16:41:50.785516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.785542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.785680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.785706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.785847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.785872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.786012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.786037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.786160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.786185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 
00:25:11.520 [2024-07-15 16:41:50.786356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.786380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.786510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.786537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.786698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.786728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.786890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.786916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.787050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.787076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 
00:25:11.520 [2024-07-15 16:41:50.787214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.787238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.787363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.787388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.787553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.787579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.787742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.787767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.787908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.787934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 
00:25:11.520 [2024-07-15 16:41:50.788060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.788087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.788232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.788261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.788394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.788420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.788559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.788586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.520 [2024-07-15 16:41:50.788728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.788753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 
00:25:11.520 [2024-07-15 16:41:50.788925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.520 [2024-07-15 16:41:50.788952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.520 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.789094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.789120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.789267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.789292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.789449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.789474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.789599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.789625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.789764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.789789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.789926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.789953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.790089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.790115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.790276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.790301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.790426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.790451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.790595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.790619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.790764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.790789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.790931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.790957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.791095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.791121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.791285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.791310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.791466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.791492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.791628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.791653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.791783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.791808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.791964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.791990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.792139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.792164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.792302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.792327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.792453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.792480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.792645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.792669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.792797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.792821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.792980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.793007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.793191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.793217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.793389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.793414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.793571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.793600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.793737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.793762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.793903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.793928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.794085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.794110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.794254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.794280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.794439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.794464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.794595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.794619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.794748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.794774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.794911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.794937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.795082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.795107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.795236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.795261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.795423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.795449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.795613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.795639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 
00:25:11.521 [2024-07-15 16:41:50.795781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.795806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.521 qpair failed and we were unable to recover it. 00:25:11.521 [2024-07-15 16:41:50.795958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.521 [2024-07-15 16:41:50.795985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.796146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.796172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.796307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.796333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.796475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.796500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.796639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.796665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.796804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.796830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.796992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.797019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.797146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.797172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.797345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.797370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.797563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.797589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.797749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.797775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.797912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.797938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.798105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.798130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.798278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.798304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.798463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.798490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.798652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.798677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.798828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.798853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.799053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.799079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.799222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.799247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.799435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.799460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.799615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.799639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.799779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.799805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.799969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.799995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.800159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.800185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.800334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.800359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.800491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.800518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.800653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.800683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.800827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.800853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.801020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.801047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.801186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.801211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.801400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.801425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.801590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.801616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.801747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.801773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.801932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.801957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.802119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.802145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.802278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.802302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.802462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.802488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.802626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.802652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.802810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.802834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 
00:25:11.522 [2024-07-15 16:41:50.802987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.803013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.803159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.803185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.522 qpair failed and we were unable to recover it. 00:25:11.522 [2024-07-15 16:41:50.803347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.522 [2024-07-15 16:41:50.803372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.523 qpair failed and we were unable to recover it. 00:25:11.523 [2024-07-15 16:41:50.803539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.523 [2024-07-15 16:41:50.803565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.523 qpair failed and we were unable to recover it. 00:25:11.523 [2024-07-15 16:41:50.803736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.523 [2024-07-15 16:41:50.803762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.523 qpair failed and we were unable to recover it. 
00:25:11.523 [2024-07-15 16:41:50.803889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.803916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.804046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.804071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.804253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.804279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.804407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.804434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.804574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.804599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.804730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.804756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.804916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.804942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.805076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.805101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.805253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.805279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.805411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.805437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.805593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.805618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.805780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.805806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.805964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.805990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.806139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.806165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.806351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.806377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.806538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.806563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.806750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.806775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.806908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.806934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.807115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.807141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.807272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.807298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.807483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.807508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.807674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.807701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.807860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.807897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.808060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.808085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.808211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.808236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.808391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.808417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.808546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.808572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.808737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.808763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.808950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.808976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.809165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.809191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.809376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.809401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.809529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.809554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.809715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.809741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.809911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.809937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.810095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.810121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.810248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.810274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.810409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.810434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.810579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.523 [2024-07-15 16:41:50.810605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.523 qpair failed and we were unable to recover it.
00:25:11.523 [2024-07-15 16:41:50.810766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.810792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.810925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.810950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.811073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.811098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.811244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.811269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.811429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.811454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.811614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.811640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.811815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.811841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.811980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.812006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.812185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.812210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.812376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.812402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.812567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.812593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.812771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.812809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.524 [2024-07-15 16:41:50.812991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.524 [2024-07-15 16:41:50.813020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.524 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.813190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.813216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.813359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.813385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.813564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.813590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.813722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.813748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.813909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.813935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.814059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.814085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.814241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.814267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.814412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.814438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.814567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.814594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.814731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.814757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.814934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.814961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.815150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.815181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.815345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.815371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.815531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.815557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.815697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.815725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.815890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.815917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.816075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.816101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.816237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.816263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.816451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.816477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.816663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.816690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.816822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.816848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.816996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.817023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.817180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.817207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.817346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.817372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.817527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.817553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.817713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.817740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.817887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.817914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.818057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.818083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.818213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.818239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.818366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.818394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.818549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.818575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.818711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.818737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.818898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.818924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.819052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.819078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.819209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.819235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.819409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.819435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.819569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.819594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.819749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.819775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.819917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.819949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.820126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.525 [2024-07-15 16:41:50.820152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.525 qpair failed and we were unable to recover it.
00:25:11.525 [2024-07-15 16:41:50.820318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.526 [2024-07-15 16:41:50.820343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.526 qpair failed and we were unable to recover it.
00:25:11.526 [2024-07-15 16:41:50.820468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.526 [2024-07-15 16:41:50.820494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.526 qpair failed and we were unable to recover it.
00:25:11.526 [2024-07-15 16:41:50.820663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.526 [2024-07-15 16:41:50.820690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.526 qpair failed and we were unable to recover it.
00:25:11.526 [2024-07-15 16:41:50.820848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.526 [2024-07-15 16:41:50.820872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.526 qpair failed and we were unable to recover it.
00:25:11.526 [2024-07-15 16:41:50.821043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.821069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.821191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.821217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.821375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.821400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.821562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.821587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.821730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.821755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 
00:25:11.526 [2024-07-15 16:41:50.821916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.821942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.822078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.822104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.822266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.822296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.822437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.822463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.822644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.822669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 
00:25:11.526 [2024-07-15 16:41:50.822830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.822855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.822989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.823016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.823149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.823174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.823337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.823364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.823556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.823582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 
00:25:11.526 [2024-07-15 16:41:50.823715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.823741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.823899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.823925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.824081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.824106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.824281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.824306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.824438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.824462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 
00:25:11.526 [2024-07-15 16:41:50.824625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.824649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.824788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.824813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.824946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.824971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.825104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.825129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.825272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.825297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 
00:25:11.526 [2024-07-15 16:41:50.825484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.825509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.825645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.825670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.825800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.825825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.825991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.826016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.826164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.826189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 
00:25:11.526 [2024-07-15 16:41:50.826323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.826348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.826479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.826504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.826657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.826683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.826820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.826844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.827002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.827033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 
00:25:11.526 [2024-07-15 16:41:50.827201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.827228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.526 qpair failed and we were unable to recover it. 00:25:11.526 [2024-07-15 16:41:50.827355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.526 [2024-07-15 16:41:50.827381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.827527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.827555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.827684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.827711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.827869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.827903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.828065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.828092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.828218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.828244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.828378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.828406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.828563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.828589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.828730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.828757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.828917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.828945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.829112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.829138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.829299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.829330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.829468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.829494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.829629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.829655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.829816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.829841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.829990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.830016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.830174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.830200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.830361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.830387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.830553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.830579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.830731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.830757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.830886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.830913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.831046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.831072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.831204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.831230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.831377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.831403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.831560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.831586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.831752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.831779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.831917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.831944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.832107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.832133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.832270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.832297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.832445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.832472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.832633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.832660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.832816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.832843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.833011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.833038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.833192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.833218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.833350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.833376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.833518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.833544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.833671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.833697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.833841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.833867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.834017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.834044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 
00:25:11.527 [2024-07-15 16:41:50.834207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.834234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.527 [2024-07-15 16:41:50.834365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.527 [2024-07-15 16:41:50.834392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.527 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.834530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.834560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.834693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.834719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.834881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.834908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 
00:25:11.528 [2024-07-15 16:41:50.835070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.835096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.835232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.835259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.835420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.835446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.835591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.835618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.835770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.835796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 
00:25:11.528 [2024-07-15 16:41:50.835968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.835995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.836126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.836151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.836282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.836312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.836501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.836527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 00:25:11.528 [2024-07-15 16:41:50.836692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.528 [2024-07-15 16:41:50.836718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.528 qpair failed and we were unable to recover it. 
00:25:11.531 [2024-07-15 16:41:50.856068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.856094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.856222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.856248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.856415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.856441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.856602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.856629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.856763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.856789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 
00:25:11.531 [2024-07-15 16:41:50.856954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.856981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.857155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.857181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.857305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.857331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.857492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.857518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.857647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.857673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 
00:25:11.531 [2024-07-15 16:41:50.857835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.857862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.858025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.858051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.858208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.858234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.858359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.858385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.858552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.858579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 
00:25:11.531 [2024-07-15 16:41:50.858738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.858765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.858937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.858963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.859110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.859136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.859264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.859291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.859447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.859473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 
00:25:11.531 [2024-07-15 16:41:50.859615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.859641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.859776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.859803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.859945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.859972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.860107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.860133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.860256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.860283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 
00:25:11.531 [2024-07-15 16:41:50.860427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.860454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.860628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.860655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.860815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.860841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.860994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.861021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.861144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.861170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 
00:25:11.531 [2024-07-15 16:41:50.861345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.861371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.861512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.861539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.531 qpair failed and we were unable to recover it. 00:25:11.531 [2024-07-15 16:41:50.861669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.531 [2024-07-15 16:41:50.861695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.861821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.861847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.861991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.862022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.862217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.862243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.862379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.862405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.862570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.862596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.862730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.862757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.862901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.862927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.863052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.863078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.863240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.863266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.863428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.863454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.863581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.863608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.863749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.863776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.863940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.863966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.864133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.864160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.864293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.864321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.864455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.864481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.864623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.864649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.864811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.864838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.865000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.865027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.865157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.865183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.865310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.865337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.865496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.865522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.865646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.865672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.865829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.865855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.865994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.866021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.866171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.866197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.866360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.866386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.866523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.866549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.866713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.866739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.866873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.866907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.867082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.867108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.867241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.867268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.867393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.867419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.867563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.867590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.867745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.867771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.867907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.867935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.868132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.868159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 
00:25:11.532 [2024-07-15 16:41:50.868322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.868348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.868481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.868508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.868635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.532 [2024-07-15 16:41:50.868661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.532 qpair failed and we were unable to recover it. 00:25:11.532 [2024-07-15 16:41:50.868791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.533 [2024-07-15 16:41:50.868817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.533 qpair failed and we were unable to recover it. 00:25:11.533 [2024-07-15 16:41:50.869008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.533 [2024-07-15 16:41:50.869039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.533 qpair failed and we were unable to recover it. 
00:25:11.533 [2024-07-15 16:41:50.869206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:11.533 [2024-07-15 16:41:50.869233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 
00:25:11.533 qpair failed and we were unable to recover it. 
[Repeats elided: the identical three-record sequence — connect() failed with errno = 111 (ECONNREFUSED) in posix_sock_create, the sock connection error for tqpair=0x7f33e8000b90 against addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — recurs continuously from 16:41:50.869206 through 16:41:50.889192 (elapsed 00:25:11.533–00:25:11.536), differing only in microsecond timestamps.]
00:25:11.536 [2024-07-15 16:41:50.889355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.889381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 00:25:11.536 [2024-07-15 16:41:50.889505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.889531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 00:25:11.536 [2024-07-15 16:41:50.889669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.889697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 00:25:11.536 [2024-07-15 16:41:50.889848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.889874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 00:25:11.536 [2024-07-15 16:41:50.890019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.890052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 
00:25:11.536 [2024-07-15 16:41:50.890212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.890239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 00:25:11.536 [2024-07-15 16:41:50.890380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.890409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 00:25:11.536 [2024-07-15 16:41:50.890546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.890573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.536 qpair failed and we were unable to recover it. 00:25:11.536 [2024-07-15 16:41:50.890705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.536 [2024-07-15 16:41:50.890732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.890890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.890916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 16:41:50.891048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.891075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.891220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.891246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.891414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.891441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.891608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.891635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.891795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.891820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 16:41:50.891963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.891991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.892172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.892198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.892356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.892383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.892545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.892572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.892710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.892737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 16:41:50.892888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.892915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.893047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.893074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.893244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.893270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.893399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.893426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.893572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.893598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 16:41:50.893786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.893812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.893952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.893978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.894133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.894159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.894327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.894354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.894475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.894501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 16:41:50.894657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.894683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.894816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.894842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.895012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.895039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.895163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.895189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.895354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.895380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 16:41:50.895510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.537 [2024-07-15 16:41:50.895537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 16:41:50.895683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.895710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.895868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.895901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.896030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.896056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.896192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.896220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 16:41:50.896412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.896439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.896598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.896625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.896760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.896786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.896915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.896942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.897082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.897112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 16:41:50.897271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.897297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.897429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.897457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.897584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.897611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.897801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.897827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.897976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.898003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 16:41:50.898176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.898203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.898349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.898376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.898567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.898593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.898779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.898804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.898977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.899004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 16:41:50.899134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.899161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.899331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.899358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.899488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.899515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.899649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.899676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.899817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.899843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 16:41:50.899988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.900015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.900175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.900201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.900348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.900374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.900499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.538 [2024-07-15 16:41:50.900525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 16:41:50.900671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.900697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 
00:25:11.539 [2024-07-15 16:41:50.900858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.900900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.901072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.901098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.901235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.901261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.901397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.901424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.901552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.901578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 
00:25:11.539 [2024-07-15 16:41:50.901758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.901785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.901928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.901955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.902147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.902174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.902337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.902364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 00:25:11.539 [2024-07-15 16:41:50.902491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.539 [2024-07-15 16:41:50.902517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420 00:25:11.539 qpair failed and we were unable to recover it. 
00:25:11.539 [2024-07-15 16:41:50.902672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.539 [2024-07-15 16:41:50.902698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33e8000b90 with addr=10.0.0.2, port=4420
00:25:11.539 qpair failed and we were unable to recover it.
00:25:11.541 [2024-07-15 16:41:50.917032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.541 [2024-07-15 16:41:50.917071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.541 qpair failed and we were unable to recover it.
00:25:11.542 [2024-07-15 16:41:50.922735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.922762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.922909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.922936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.923094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.923119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.923293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.923318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.923463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.923489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 
00:25:11.542 [2024-07-15 16:41:50.923650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.923675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.923843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.923869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.923997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.924023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.924150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.924185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.924316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.924341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 
00:25:11.542 [2024-07-15 16:41:50.924481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.924507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.924633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.924658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.924784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.924809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.924951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.924977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.925117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.925147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 
00:25:11.542 [2024-07-15 16:41:50.925278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.925304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.925441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.925466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.925619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.925645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.925781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.925809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.542 [2024-07-15 16:41:50.925982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.926009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 
00:25:11.542 [2024-07-15 16:41:50.926166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.542 [2024-07-15 16:41:50.926192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.542 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.926321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.926346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.926509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.926534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.926686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.926712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.926871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.926903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 
00:25:11.543 [2024-07-15 16:41:50.927055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.927081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.927238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.927265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.927401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.927427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.927563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.927589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.927744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.927770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 
00:25:11.543 [2024-07-15 16:41:50.927920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.927946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.928078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.928103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.928235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.928260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.928417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.928443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.928578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.928604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 
00:25:11.543 [2024-07-15 16:41:50.928746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.928771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.928914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.928941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.929072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.929098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.929235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.929261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.929389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.929414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 
00:25:11.543 [2024-07-15 16:41:50.929548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.929573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.929705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.929730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.929901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.929928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.930056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.930082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.930245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.930271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 
00:25:11.543 [2024-07-15 16:41:50.930397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.930423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.930587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.930613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.930738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.930765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.930889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.930916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.931086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.931112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 
00:25:11.543 [2024-07-15 16:41:50.931246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.931271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.931413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.931438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.931601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.931626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.931777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.931802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 00:25:11.543 [2024-07-15 16:41:50.931993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.932023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.543 qpair failed and we were unable to recover it. 
00:25:11.543 [2024-07-15 16:41:50.932181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-07-15 16:41:50.932207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.932337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.932362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.932484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.932510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.932632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.932657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.932786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.932812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 
00:25:11.544 [2024-07-15 16:41:50.932982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.933009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.933131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.933157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.933309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.933335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.933466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.933492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.933623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.933648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 
00:25:11.544 [2024-07-15 16:41:50.933812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.933838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.933967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.933993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.934180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.934205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.934330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.934355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.934477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.934502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 
00:25:11.544 [2024-07-15 16:41:50.934652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.934677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.934832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.934857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.934991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.935019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.935170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.935196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.935362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.935389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 
00:25:11.544 [2024-07-15 16:41:50.935520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.935547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.935691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.935717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.935856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.935898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.936030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.936056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 00:25:11.544 [2024-07-15 16:41:50.936194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-07-15 16:41:50.936220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.544 qpair failed and we were unable to recover it. 
00:25:11.547 [2024-07-15 16:41:50.955196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.955222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.955347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.955372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.955526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.955552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.955680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.955705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.955864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.955908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 
00:25:11.547 [2024-07-15 16:41:50.956070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.956096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.956256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.956281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.956447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.956473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.956649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.956674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.956829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.956854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 
00:25:11.547 [2024-07-15 16:41:50.957020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.957046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.957203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.957233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.957423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.957448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.957600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.957625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 00:25:11.547 [2024-07-15 16:41:50.957780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.957805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.547 qpair failed and we were unable to recover it. 
00:25:11.547 [2024-07-15 16:41:50.957969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.547 [2024-07-15 16:41:50.957996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.958138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.958164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.958319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.958344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.958489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.958514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.958653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.958679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 
00:25:11.548 [2024-07-15 16:41:50.958832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.958858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.959000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.959026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.959163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.959188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.959311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.959337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.959489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.959516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 
00:25:11.548 [2024-07-15 16:41:50.959661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.959687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.959817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.959842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.960004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.960030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.960154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.960179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.960308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.960334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 
00:25:11.548 [2024-07-15 16:41:50.960489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.960514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.960651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.960677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.960823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.960849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.960977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.961002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.961135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.961161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 
00:25:11.548 [2024-07-15 16:41:50.961289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.961315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.961460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.961485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.961636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.961661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.961802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.961827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.961971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.961997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 
00:25:11.548 [2024-07-15 16:41:50.962143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.962169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.962357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.962382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.962523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.962548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.962690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.962715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.962845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.962870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 
00:25:11.548 [2024-07-15 16:41:50.963033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.963058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.963187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.963213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.963340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.963364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.963526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.548 [2024-07-15 16:41:50.963552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.548 qpair failed and we were unable to recover it. 00:25:11.548 [2024-07-15 16:41:50.963709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.963734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.963874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.963904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.964065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.964095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.964256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.964281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.964439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.964464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.964613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.964639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.964797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.964824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.964983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.965010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.965163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.965188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.965353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.965379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.965512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.965537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.965677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.965704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.965868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.965901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.966063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.966088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.966270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.966295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.966443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.966470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.966616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.966641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.966771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.966798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.966964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.966992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.967123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.967149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.967269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.967294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.967425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.967451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.967604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.967629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.967762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.967788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.967920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.967947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.968085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.968111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.968234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.968260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.968388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.968413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.968575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.968600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.968746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.968772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.968930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.968956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.969111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.969136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.969265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.969291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.969419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.969446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.969580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.969605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.969760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.969785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 
00:25:11.549 [2024-07-15 16:41:50.969978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.970004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.970139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.970165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.970319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.970344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.549 [2024-07-15 16:41:50.970517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.549 [2024-07-15 16:41:50.970543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.549 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.970673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.970699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.970830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.970855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.970993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.971023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.971144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.971170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.971313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.971339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.971479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.971504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.971661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.971686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.971807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.971832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.971965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.971991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.972121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.972146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.972270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.972295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.972442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.972467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.972596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.972623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.972781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.972806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.972933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.972959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.973117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.973142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.973280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.973306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.973463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.973489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.973630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.973656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.973819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.973845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.973986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.974013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.974150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.974176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.974297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.974322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.974485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.974510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.974634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.974659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.974785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.974812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.974956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.974981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.975109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.975135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.975275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.975300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.975433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.975460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.975600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.975626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.975752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.975778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.975923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.975948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.976084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.976109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.976267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.976291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.976416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.976441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 
00:25:11.550 [2024-07-15 16:41:50.976596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.976621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.976747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.976772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.976895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.976920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.550 qpair failed and we were unable to recover it. 00:25:11.550 [2024-07-15 16:41:50.977050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.550 [2024-07-15 16:41:50.977075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.977261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.977285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.977416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.977441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.977583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.977612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.977745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.977772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.977936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.977962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.978119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.978146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.978321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.978346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.978518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.978543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.978720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.978745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.978902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.978928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.979085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.979111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.979295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.979320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.979478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.979503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.979662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.979689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.979815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.979840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.980015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.980041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.980211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.980236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.980389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.980414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.980580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.980605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.980743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.980768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.980927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.980953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.981082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.981108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.981261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.981286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.981457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.981482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.981664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.981690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.981847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.981872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.982065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.982090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.982219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.982244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.982371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.982396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.982526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.982552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.982698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.982724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.982852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.982883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.983019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.983046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.983177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.983202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.983333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.983359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.983495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.983521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 
00:25:11.551 [2024-07-15 16:41:50.983650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.983677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.983802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.983827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.983950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.983976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.551 qpair failed and we were unable to recover it. 00:25:11.551 [2024-07-15 16:41:50.984110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.551 [2024-07-15 16:41:50.984136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.984272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.984298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 
00:25:11.552 [2024-07-15 16:41:50.984457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.984483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.984619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.984652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.984794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.984820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.985009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.985035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.985169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.985194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 
00:25:11.552 [2024-07-15 16:41:50.985328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.985353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.985483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.985510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.985677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.985702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.985833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.985859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 00:25:11.552 [2024-07-15 16:41:50.985988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.552 [2024-07-15 16:41:50.986013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.552 qpair failed and we were unable to recover it. 
00:25:11.552 [2024-07-15 16:41:50.986176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.986201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.986340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.986366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.986521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.986547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.986708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.986733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.986856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.986887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.987024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.987049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.987221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.987246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.987377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.987403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.987554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.987579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.987750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.987778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.987916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.987943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.988082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.988109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.988236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.988262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.988451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.988477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.988627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.988652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.988784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.988810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.988939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.988965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.989138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.989163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.989293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.989319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.989479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.989504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.989660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.989685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.989823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.989850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.990019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.990044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.552 [2024-07-15 16:41:50.990181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.552 [2024-07-15 16:41:50.990206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.552 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.990342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.990367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.990497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.990522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.990691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.990717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.990869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.990901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.991064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.991089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.991219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.991244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.991368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.991393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.991551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.991581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.991710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.991735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.991887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.991913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.992040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.992065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.992200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.992225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.992359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.992384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.992571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.992596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.992730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.992755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.992884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.992909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.993053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.993078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.993225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.993250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.993393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.993418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.993558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.993583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.993742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.993767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.993909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.993935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.994061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.994086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.994245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.994270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.994404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.994429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.994565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.994590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.994749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.994776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.994906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.994932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.995063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.995088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.995251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.995276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.995404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.995431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.995556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.995581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.995744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.995770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 [2024-07-15 16:41:50.995921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.995946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 16:41:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:11.553 [2024-07-15 16:41:50.996105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.996134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 16:41:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 16:41:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:11.553 [2024-07-15 16:41:50.996275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 [2024-07-15 16:41:50.996301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.553 16:41:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:11.553 [2024-07-15 16:41:50.996442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.553 16:41:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:11.553 [2024-07-15 16:41:50.996469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.553 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.996614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.996639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.996795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.996821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.996953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.996979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.997121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.997147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.997315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.997341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.997525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.997550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.997716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.997742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.997865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.997908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.998070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.998100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.998268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.998295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.998430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.998455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.998611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.998638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.998781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.998807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.998975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.999002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.999135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.999170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.999312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.999345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.999500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.999525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.999670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.999697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:50.999846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:50.999871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.000003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.000029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.000175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.000201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.000330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.000356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.000496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.000522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.000658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.000683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.000887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.000913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.001044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.001072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.001220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.001246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.001378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.001405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.001532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.001558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.001703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.001729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.001873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.001906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.002069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.554 [2024-07-15 16:41:51.002095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.554 qpair failed and we were unable to recover it.
00:25:11.554 [2024-07-15 16:41:51.002252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.002278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 00:25:11.554 [2024-07-15 16:41:51.002439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.002465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 00:25:11.554 [2024-07-15 16:41:51.002600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.002626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 00:25:11.554 [2024-07-15 16:41:51.002789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.002815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 00:25:11.554 [2024-07-15 16:41:51.002981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.003009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 
00:25:11.554 [2024-07-15 16:41:51.003149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.003175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 00:25:11.554 [2024-07-15 16:41:51.003314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.003341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 00:25:11.554 [2024-07-15 16:41:51.003472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.554 [2024-07-15 16:41:51.003498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.554 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.003673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.003704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.003832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.003859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.004008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.004034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.004160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.004187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.004318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.004344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.004494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.004530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.004702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.004728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.004863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.004897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.005041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.005068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.005200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.005226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.005360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.005386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.005546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.005572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.005708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.005734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.005895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.005921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.006079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.006104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.006244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.006271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.006400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.006425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.006554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.006580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.006735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.006761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.006926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.006953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.007080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.007106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.007267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.007293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.007434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.007462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.007640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.007666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.007842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.007872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.008044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.008070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.008198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.008223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.008395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.008421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.008586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.008612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.008737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.008763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.008899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.008926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.009058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.009084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.009210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.009237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.009398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.009423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.009570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.009595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.009732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.009762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.009941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.009968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 
00:25:11.555 [2024-07-15 16:41:51.010105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.010132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.010299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.010335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.555 qpair failed and we were unable to recover it. 00:25:11.555 [2024-07-15 16:41:51.010490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.555 [2024-07-15 16:41:51.010516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.010665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.010691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.010862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.010894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 
00:25:11.556 [2024-07-15 16:41:51.011054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.011081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.011214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.011248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.011403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.011435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.011578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.011603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.011740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.011766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 
00:25:11.556 [2024-07-15 16:41:51.011937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.011963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.012122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.012149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.012294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.012320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.012452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.012478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.012602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.012634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 
00:25:11.556 [2024-07-15 16:41:51.012795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 [2024-07-15 16:41:51.012821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 [2024-07-15 16:41:51.012994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 [2024-07-15 16:41:51.013020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 [2024-07-15 16:41:51.013149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 [2024-07-15 16:41:51.013176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:11.556 [2024-07-15 16:41:51.013326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:11.556 [2024-07-15 16:41:51.013352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:11.556 [2024-07-15 16:41:51.013543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:11.556 [2024-07-15 16:41:51.013569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 [2024-07-15 16:41:51.013732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 [2024-07-15 16:41:51.013759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 [2024-07-15 16:41:51.013892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 [2024-07-15 16:41:51.013919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 [2024-07-15 16:41:51.014042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 [2024-07-15 16:41:51.014068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 [2024-07-15 16:41:51.014221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.556 [2024-07-15 16:41:51.014248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.556 qpair failed and we were unable to recover it.
00:25:11.556 [2024-07-15 16:41:51.014400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.014426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.014558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.014583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.014708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.014734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.014866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.014911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.015042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.015068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 
00:25:11.556 [2024-07-15 16:41:51.015190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.015216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.015388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.015413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.015586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.015612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.015737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.015763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.015954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.015980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 
00:25:11.556 [2024-07-15 16:41:51.016106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.016132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.016295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.016322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.016476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.016506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.016638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.016663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.016811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.016836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 
00:25:11.556 [2024-07-15 16:41:51.016999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.017025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.017157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.017183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.017305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.556 [2024-07-15 16:41:51.017331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.556 qpair failed and we were unable to recover it. 00:25:11.556 [2024-07-15 16:41:51.017493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.557 [2024-07-15 16:41:51.017518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.557 qpair failed and we were unable to recover it. 00:25:11.557 [2024-07-15 16:41:51.017652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.557 [2024-07-15 16:41:51.017678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420 00:25:11.557 qpair failed and we were unable to recover it. 
00:25:11.558 [2024-07-15 16:41:51.027600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.558 [2024-07-15 16:41:51.027626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.558 qpair failed and we were unable to recover it.
00:25:11.558 [2024-07-15 16:41:51.027764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.558 [2024-07-15 16:41:51.027790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.558 qpair failed and we were unable to recover it.
00:25:11.558 [2024-07-15 16:41:51.027936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.558 [2024-07-15 16:41:51.027962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.558 qpair failed and we were unable to recover it.
00:25:11.558 [2024-07-15 16:41:51.028121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.558 [2024-07-15 16:41:51.028146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f8000b90 with addr=10.0.0.2, port=4420
00:25:11.558 qpair failed and we were unable to recover it.
00:25:11.558 [2024-07-15 16:41:51.028349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.558 [2024-07-15 16:41:51.028387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.558 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.035886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.035913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.036064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.036090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 Malloc0
00:25:11.559 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:11.559 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:11.559 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:11.559 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:11.559 [2024-07-15 16:41:51.037192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.037223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.037399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.037426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.037570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.037597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.037733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.037759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.037902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.037929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.038070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.038097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.038248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.038274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.038417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.038443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.038598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.038624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.038759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.038785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.038922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.038948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.039091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.039117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.039258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.039284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.039428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.559 [2024-07-15 16:41:51.039454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.559 qpair failed and we were unable to recover it.
00:25:11.559 [2024-07-15 16:41:51.039612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.039638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.039794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.039820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.039969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.039995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.040092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:11.560 [2024-07-15 16:41:51.040157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.040184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.040344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.040369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.040509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.040540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.040699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.040725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.040916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.040942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.041112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.041138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.041276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.041302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.041437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.041463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.041604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.041630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.041760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.041785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.041950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.041976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.042132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.042158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.042290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.042316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.042460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.042485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.042628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.042654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.042785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.042811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.043004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.043045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.043192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.043219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.043380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.043411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.043578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.043603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.043736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.043762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.043911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.043938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.044099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.044124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.044296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.044321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.044476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.044502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.044627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.044652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.044797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.044822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.044996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.045021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.045200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.045225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.045360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.045385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.045531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.045556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.045713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.045738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.045910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.045935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.046070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.046095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.046223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.046248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.046417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.046442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.046576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.046601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.046761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.046786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.046930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.046956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.047082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.047107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.047253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.560 [2024-07-15 16:41:51.047278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.560 qpair failed and we were unable to recover it.
00:25:11.560 [2024-07-15 16:41:51.047404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.047429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.047557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.047582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.047740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.047765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.047904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.047931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.048065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.048094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.048272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.048297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:11.561 [2024-07-15 16:41:51.048420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.048445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:11.561 [2024-07-15 16:41:51.048597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:11.561 [2024-07-15 16:41:51.048623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.048752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.048779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.048946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.048993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.049166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.049191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.049333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.049358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.049514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.049539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.049698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.049723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.049862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.049894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.050052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.050077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 A controller has encountered a failure and is being reset.
00:25:11.561 [2024-07-15 16:41:51.050249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.050289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.050431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.050458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.050614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.050641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.050803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.050828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.050995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.051021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.051170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.051196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.051384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.051410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.051545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.051571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.051738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.051763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.051938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.051965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.052104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.052130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.052281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.052307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.052439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.561 [2024-07-15 16:41:51.052465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.561 qpair failed and we were unable to recover it.
00:25:11.561 [2024-07-15 16:41:51.052600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.562 [2024-07-15 16:41:51.052631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.562 qpair failed and we were unable to recover it.
00:25:11.562 [2024-07-15 16:41:51.052762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.562 [2024-07-15 16:41:51.052788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.562 qpair failed and we were unable to recover it.
00:25:11.562 [2024-07-15 16:41:51.052947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.562 [2024-07-15 16:41:51.052974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.562 qpair failed and we were unable to recover it.
00:25:11.562 [2024-07-15 16:41:51.053107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.562 [2024-07-15 16:41:51.053133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420
00:25:11.562 qpair failed and we were unable to recover it.
00:25:11.562 [2024-07-15 16:41:51.053284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.053310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.053478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.053504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.053687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.053713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.053847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.053873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.054015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.054041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 
00:25:11.562 [2024-07-15 16:41:51.054182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.054209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.054372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.054398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.054523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.054548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.054686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.054712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.054881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.054908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 
00:25:11.562 [2024-07-15 16:41:51.055051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.055077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.055246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.055272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.055406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.055432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.055603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.055629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f33f0000b90 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.055760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.055787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 
00:25:11.562 [2024-07-15 16:41:51.055921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.055947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.056092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.056117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.562 [2024-07-15 16:41:51.056285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.056311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:11.562 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.562 [2024-07-15 16:41:51.056476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.056502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 
00:25:11.562 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.562 [2024-07-15 16:41:51.056643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.056669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.056823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.056848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.057022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.057053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.057188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.057214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.057372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.057397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 
00:25:11.562 [2024-07-15 16:41:51.057536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.057561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.057722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.057747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.057887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.057912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.562 [2024-07-15 16:41:51.058046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.562 [2024-07-15 16:41:51.058071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.562 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.058241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.058266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 [2024-07-15 16:41:51.058390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.058415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.058542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.058567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.058727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.058752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.058903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.058929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.059062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.059087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 [2024-07-15 16:41:51.059214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.059239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.059389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.059414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.059577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.059602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.059759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.059784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.059911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.059937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 [2024-07-15 16:41:51.060097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.060122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.060254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.060279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.060453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.060478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.060610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.060636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.060765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.060790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 [2024-07-15 16:41:51.060920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.060946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.061102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.061127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.061260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.061285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.061462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.061487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.061617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.061646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 [2024-07-15 16:41:51.061804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.061829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.061978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.062004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.062152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.062177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.062309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.062335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.062498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.062523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 [2024-07-15 16:41:51.062654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.062678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.062838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.062863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.063011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.063037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.063199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.063224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.063357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.063383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 [2024-07-15 16:41:51.063516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.063541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.063688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.063713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.063854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.063886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.064051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.064077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.064213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.064239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.563 qpair failed and we were unable to recover it. 
00:25:11.563 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.563 [2024-07-15 16:41:51.064386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.064412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.563 [2024-07-15 16:41:51.064598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.563 [2024-07-15 16:41:51.064623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.563 qpair failed and we were unable to recover it. 00:25:11.563 [2024-07-15 16:41:51.064753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.563 [2024-07-15 16:41:51.064779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.064942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.064968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 
00:25:11.564 [2024-07-15 16:41:51.065103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.065128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.065286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.065312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.065466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.065492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.065619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.065644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.065791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.065816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 
00:25:11.564 [2024-07-15 16:41:51.065951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.065977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.066118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.066144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.066271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.066295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.066470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.066495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.066617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.066642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 
00:25:11.564 [2024-07-15 16:41:51.066830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.066856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.066998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.067023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.067152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.067177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.067301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.067327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.067486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.067511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 
00:25:11.564 [2024-07-15 16:41:51.067643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.067669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.067826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.067851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.068008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.068034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 00:25:11.564 [2024-07-15 16:41:51.068173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.564 [2024-07-15 16:41:51.068199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1628200 with addr=10.0.0.2, port=4420 00:25:11.564 qpair failed and we were unable to recover it. 
00:25:11.564 [2024-07-15 16:41:51.068218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.564 [2024-07-15 16:41:51.070764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.564 [2024-07-15 16:41:51.070918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.564 [2024-07-15 16:41:51.070945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.564 [2024-07-15 16:41:51.070960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.564 [2024-07-15 16:41:51.070973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:11.564 [2024-07-15 16:41:51.071006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.564 qpair failed and we were unable to recover it. 
00:25:11.564 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:11.564 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:11.564 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:11.564 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:11.564 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:11.564 16:41:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1621490
00:25:11.564 [2024-07-15 16:41:51.080709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.564 [2024-07-15 16:41:51.080893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.564 [2024-07-15 16:41:51.080919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.564 [2024-07-15 16:41:51.080933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.564 [2024-07-15 16:41:51.080958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.564 [2024-07-15 16:41:51.080986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.564 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.090700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.090844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.090870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.090893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.090906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.090935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.100739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.100896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.100926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.100942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.100960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.100990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.110696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.110833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.110860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.110874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.110897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.110926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.120707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.120843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.120868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.120916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.120930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.120960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.130726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.130860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.130893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.130909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.130922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.130949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.140755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.140898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.140924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.140938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.140950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.140978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.150804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.150963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.150989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.151004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.151016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.151044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.160803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.160943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.160968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.160983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.160995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.161023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.170832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.170980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.824 [2024-07-15 16:41:51.171006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.824 [2024-07-15 16:41:51.171020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.824 [2024-07-15 16:41:51.171033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.824 [2024-07-15 16:41:51.171060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.824 qpair failed and we were unable to recover it.
00:25:11.824 [2024-07-15 16:41:51.180907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.824 [2024-07-15 16:41:51.181070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.181096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.181110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.181122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.181150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.190921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.191060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.191086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.191106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.191119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.191147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.200962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.201093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.201118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.201132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.201145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.201172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.210991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.211173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.211198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.211213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.211226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.211253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.220992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.221134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.221158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.221172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.221185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.221212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.231031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.231174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.231199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.231213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.231227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.231254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.241286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.241434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.241459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.241473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.241486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.241515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.251156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.251291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.251316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.251330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.251342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.251370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.261165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.261310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.261335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.261350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.261362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.261390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.271226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.271365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.271390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.271405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.271417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.271444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.281221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.281387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.281413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.281433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.281447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.281475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.291226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.291355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.291379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.291394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.291407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.291434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.301235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.301373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.301398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.301412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.301424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.301451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.311310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.311458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.825 [2024-07-15 16:41:51.311485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.825 [2024-07-15 16:41:51.311505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.825 [2024-07-15 16:41:51.311519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.825 [2024-07-15 16:41:51.311547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.825 qpair failed and we were unable to recover it.
00:25:11.825 [2024-07-15 16:41:51.321416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.825 [2024-07-15 16:41:51.321560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.321585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.321600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.321612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.321640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.331438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.331569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.331595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.331610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.331622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.331650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.341364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.341514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.341539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.341553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.341566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.341594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.351420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.351554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.351580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.351594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.351607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.351635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.361419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.361549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.361574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.361588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.361601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.361628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.371438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.371572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.371598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.371617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.371631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.371659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.381496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.381657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.381684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.381699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.381711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.381740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.391479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.391618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.391644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.391658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.391671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.391698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.401516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.826 [2024-07-15 16:41:51.401656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.826 [2024-07-15 16:41:51.401681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.826 [2024-07-15 16:41:51.401695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.826 [2024-07-15 16:41:51.401708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:11.826 [2024-07-15 16:41:51.401735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.826 qpair failed and we were unable to recover it.
00:25:11.826 [2024-07-15 16:41:51.411621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.826 [2024-07-15 16:41:51.411777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.826 [2024-07-15 16:41:51.411803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.826 [2024-07-15 16:41:51.411817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.826 [2024-07-15 16:41:51.411829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:11.826 [2024-07-15 16:41:51.411856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.826 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.421578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.421711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.421736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.421751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.421763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.421791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.431615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.431753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.431778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.431793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.431805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.431832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.441638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.441767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.441793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.441807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.441820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.441848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.451643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.451773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.451798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.451812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.451825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.451853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.461686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.461822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.461852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.461868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.461890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.461920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.471710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.471845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.471870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.471893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.471906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.471934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.481745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.481885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.481911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.481924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.481937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.481965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.491789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.491928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.491953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.491967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.491980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.492007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.501807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.501945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.086 [2024-07-15 16:41:51.501970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.086 [2024-07-15 16:41:51.501984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.086 [2024-07-15 16:41:51.501996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.086 [2024-07-15 16:41:51.502025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.086 qpair failed and we were unable to recover it. 
00:25:12.086 [2024-07-15 16:41:51.511968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.086 [2024-07-15 16:41:51.512132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.512156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.512170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.512183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.512211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.521863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.522010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.522035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.522049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.522062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.522090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.531927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.532069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.532095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.532109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.532121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.532149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.541924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.542062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.542086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.542101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.542113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.542141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.551986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.552126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.552165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.552180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.552193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.552220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.562003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.562141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.562167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.562181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.562194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.562221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.572050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.572202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.572228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.572242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.572255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.572283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.582056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.582197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.582223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.582237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.582249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.582277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.592046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.592183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.592208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.592222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.592235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.592268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.602168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.602336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.602361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.602375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.602388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.602415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.612122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.612268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.612293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.612307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.612320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.612347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.622186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.622326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.622352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.622366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.622379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.622406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.632177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.632307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.632332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.632346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.632358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.632385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.642211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.642356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.642389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.642404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.087 [2024-07-15 16:41:51.642417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.087 [2024-07-15 16:41:51.642445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.087 qpair failed and we were unable to recover it. 
00:25:12.087 [2024-07-15 16:41:51.652353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.087 [2024-07-15 16:41:51.652495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.087 [2024-07-15 16:41:51.652519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.087 [2024-07-15 16:41:51.652534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.088 [2024-07-15 16:41:51.652546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.088 [2024-07-15 16:41:51.652573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.088 qpair failed and we were unable to recover it. 
00:25:12.088 [2024-07-15 16:41:51.662276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.088 [2024-07-15 16:41:51.662414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.088 [2024-07-15 16:41:51.662439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.088 [2024-07-15 16:41:51.662453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.088 [2024-07-15 16:41:51.662466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.088 [2024-07-15 16:41:51.662493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.088 qpair failed and we were unable to recover it. 
00:25:12.088 [2024-07-15 16:41:51.672316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.088 [2024-07-15 16:41:51.672447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.088 [2024-07-15 16:41:51.672472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.088 [2024-07-15 16:41:51.672487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.088 [2024-07-15 16:41:51.672499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.088 [2024-07-15 16:41:51.672526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.088 qpair failed and we were unable to recover it. 
00:25:12.088 [2024-07-15 16:41:51.682354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.682493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.682519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.682533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.682546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.682580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.692389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.692524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.692550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.692564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.692577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.692605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.702385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.702521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.702545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.702559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.702572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.702599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.712446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.712603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.712628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.712642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.712655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.712682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.722466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.722604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.722629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.722644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.722656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.722684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.732528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.732676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.732706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.732721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.732734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.732761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.742524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.742659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.742685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.742699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.742712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.742739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.752572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.752707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.752732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.752746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.752758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.347 [2024-07-15 16:41:51.752786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.347 qpair failed and we were unable to recover it. 
00:25:12.347 [2024-07-15 16:41:51.762659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.347 [2024-07-15 16:41:51.762812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.347 [2024-07-15 16:41:51.762837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.347 [2024-07-15 16:41:51.762851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.347 [2024-07-15 16:41:51.762863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.762897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.772619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.772750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.772776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.772789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.772808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.772836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.782682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.782826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.782850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.782864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.782882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.782912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.792669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.792801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.792827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.792841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.792854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.792887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.802701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.802834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.802859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.802873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.802898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.802926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.812702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.812828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.812854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.812868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.812887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.812916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.822760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.822912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.822939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.822954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.822968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.822996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.832804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.832956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.832984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.833004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.833018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.833046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.842826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.843007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.843033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.843047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.843059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.843087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.852824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.852993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.853019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.853033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.853047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.853075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.862917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.863079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.863105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.863119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.863137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.863165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.872910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.873095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.873119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.873133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.873146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.873174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.882911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.883043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.883068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.883082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.883095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.883123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.892944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.893105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.893133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.893148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.893164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.893194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.348 [2024-07-15 16:41:51.902986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.348 [2024-07-15 16:41:51.903140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.348 [2024-07-15 16:41:51.903165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.348 [2024-07-15 16:41:51.903179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.348 [2024-07-15 16:41:51.903192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.348 [2024-07-15 16:41:51.903220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.348 qpair failed and we were unable to recover it. 
00:25:12.349 [2024-07-15 16:41:51.913013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.349 [2024-07-15 16:41:51.913158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.349 [2024-07-15 16:41:51.913183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.349 [2024-07-15 16:41:51.913197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.349 [2024-07-15 16:41:51.913210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.349 [2024-07-15 16:41:51.913238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.349 qpair failed and we were unable to recover it. 
00:25:12.349 [2024-07-15 16:41:51.923038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.349 [2024-07-15 16:41:51.923172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.349 [2024-07-15 16:41:51.923198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.349 [2024-07-15 16:41:51.923212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.349 [2024-07-15 16:41:51.923224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.349 [2024-07-15 16:41:51.923252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.349 qpair failed and we were unable to recover it. 
00:25:12.349 [2024-07-15 16:41:51.933065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.349 [2024-07-15 16:41:51.933209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.349 [2024-07-15 16:41:51.933235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.349 [2024-07-15 16:41:51.933249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.349 [2024-07-15 16:41:51.933262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.349 [2024-07-15 16:41:51.933289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.349 qpair failed and we were unable to recover it. 
00:25:12.349 [2024-07-15 16:41:51.943156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.349 [2024-07-15 16:41:51.943330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.349 [2024-07-15 16:41:51.943356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.349 [2024-07-15 16:41:51.943370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.349 [2024-07-15 16:41:51.943383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.349 [2024-07-15 16:41:51.943410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.349 qpair failed and we were unable to recover it. 
00:25:12.610 [2024-07-15 16:41:51.953136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.610 [2024-07-15 16:41:51.953290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.610 [2024-07-15 16:41:51.953325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.610 [2024-07-15 16:41:51.953340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.610 [2024-07-15 16:41:51.953358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.610 [2024-07-15 16:41:51.953386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.610 qpair failed and we were unable to recover it. 
00:25:12.610 [2024-07-15 16:41:51.963154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.610 [2024-07-15 16:41:51.963284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.610 [2024-07-15 16:41:51.963309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.610 [2024-07-15 16:41:51.963323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.610 [2024-07-15 16:41:51.963335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.610 [2024-07-15 16:41:51.963364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.610 qpair failed and we were unable to recover it. 
00:25:12.610 [2024-07-15 16:41:51.973171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.610 [2024-07-15 16:41:51.973300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.610 [2024-07-15 16:41:51.973325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.610 [2024-07-15 16:41:51.973339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.610 [2024-07-15 16:41:51.973351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.610 [2024-07-15 16:41:51.973378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.610 qpair failed and we were unable to recover it. 
00:25:12.610 [2024-07-15 16:41:51.983208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.610 [2024-07-15 16:41:51.983343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.610 [2024-07-15 16:41:51.983368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.610 [2024-07-15 16:41:51.983382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.610 [2024-07-15 16:41:51.983395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.610 [2024-07-15 16:41:51.983422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.610 qpair failed and we were unable to recover it. 
00:25:12.610 [2024-07-15 16:41:51.993233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.610 [2024-07-15 16:41:51.993387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.610 [2024-07-15 16:41:51.993412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.610 [2024-07-15 16:41:51.993427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.610 [2024-07-15 16:41:51.993439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.610 [2024-07-15 16:41:51.993468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.610 qpair failed and we were unable to recover it. 
00:25:12.610 [2024-07-15 16:41:52.003346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.610 [2024-07-15 16:41:52.003482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.610 [2024-07-15 16:41:52.003507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.610 [2024-07-15 16:41:52.003521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.610 [2024-07-15 16:41:52.003534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.610 [2024-07-15 16:41:52.003561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.610 qpair failed and we were unable to recover it. 
00:25:12.610 [2024-07-15 16:41:52.013401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.610 [2024-07-15 16:41:52.013541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.610 [2024-07-15 16:41:52.013566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.610 [2024-07-15 16:41:52.013580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.610 [2024-07-15 16:41:52.013593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.610 [2024-07-15 16:41:52.013620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.610 qpair failed and we were unable to recover it.
00:25:12.610 [2024-07-15 16:41:52.023344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.610 [2024-07-15 16:41:52.023500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.610 [2024-07-15 16:41:52.023525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.610 [2024-07-15 16:41:52.023539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.610 [2024-07-15 16:41:52.023552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.610 [2024-07-15 16:41:52.023579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.610 qpair failed and we were unable to recover it.
00:25:12.610 [2024-07-15 16:41:52.033367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.610 [2024-07-15 16:41:52.033505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.610 [2024-07-15 16:41:52.033531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.610 [2024-07-15 16:41:52.033545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.610 [2024-07-15 16:41:52.033558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.610 [2024-07-15 16:41:52.033585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.610 qpair failed and we were unable to recover it.
00:25:12.610 [2024-07-15 16:41:52.043368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.610 [2024-07-15 16:41:52.043500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.610 [2024-07-15 16:41:52.043523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.610 [2024-07-15 16:41:52.043542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.610 [2024-07-15 16:41:52.043555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.610 [2024-07-15 16:41:52.043582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.610 qpair failed and we were unable to recover it.
00:25:12.610 [2024-07-15 16:41:52.053419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.610 [2024-07-15 16:41:52.053552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.610 [2024-07-15 16:41:52.053577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.610 [2024-07-15 16:41:52.053591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.610 [2024-07-15 16:41:52.053604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.610 [2024-07-15 16:41:52.053631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.610 qpair failed and we were unable to recover it.
00:25:12.610 [2024-07-15 16:41:52.063448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.610 [2024-07-15 16:41:52.063590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.610 [2024-07-15 16:41:52.063615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.610 [2024-07-15 16:41:52.063629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.610 [2024-07-15 16:41:52.063642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.610 [2024-07-15 16:41:52.063670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.610 qpair failed and we were unable to recover it.
00:25:12.610 [2024-07-15 16:41:52.073517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.073674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.073699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.073713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.073726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.073754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.083544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.083677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.083703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.083717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.083729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.083757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.093526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.093659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.093685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.093699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.093712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.093739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.103613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.103753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.103778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.103792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.103804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.103832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.113603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.113744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.113769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.113784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.113796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.113823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.123700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.123853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.123890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.123909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.123923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.123952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.133640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.133773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.133799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.133819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.133833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.133862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.143709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.143870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.143902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.143917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.143929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.143957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.153689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.153819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.153844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.153858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.153871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.153906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.163725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.163865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.163902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.163919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.163933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.163961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.173739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.173874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.173908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.173923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.173935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.173963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.183780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.183917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.183942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.183956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.183969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.183997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.193861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.194002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.194031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.194046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.194059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.194087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.611 [2024-07-15 16:41:52.203842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.611 [2024-07-15 16:41:52.203985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.611 [2024-07-15 16:41:52.204011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.611 [2024-07-15 16:41:52.204024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.611 [2024-07-15 16:41:52.204037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.611 [2024-07-15 16:41:52.204065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.611 qpair failed and we were unable to recover it.
00:25:12.871 [2024-07-15 16:41:52.213872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.871 [2024-07-15 16:41:52.214015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.871 [2024-07-15 16:41:52.214041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.871 [2024-07-15 16:41:52.214055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.871 [2024-07-15 16:41:52.214068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.871 [2024-07-15 16:41:52.214097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.871 qpair failed and we were unable to recover it.
00:25:12.871 [2024-07-15 16:41:52.223916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.871 [2024-07-15 16:41:52.224058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.871 [2024-07-15 16:41:52.224089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.871 [2024-07-15 16:41:52.224104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.871 [2024-07-15 16:41:52.224116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.871 [2024-07-15 16:41:52.224144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.871 qpair failed and we were unable to recover it.
00:25:12.871 [2024-07-15 16:41:52.233925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.871 [2024-07-15 16:41:52.234057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.871 [2024-07-15 16:41:52.234082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.871 [2024-07-15 16:41:52.234096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.871 [2024-07-15 16:41:52.234108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.871 [2024-07-15 16:41:52.234136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.871 qpair failed and we were unable to recover it.
00:25:12.871 [2024-07-15 16:41:52.243991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.871 [2024-07-15 16:41:52.244125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.871 [2024-07-15 16:41:52.244150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.871 [2024-07-15 16:41:52.244164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.244177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.244205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.254025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.254153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.254178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.254193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.254205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.254232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.264044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.264188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.264212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.264226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.264239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.264266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.274044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.274180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.274205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.274219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.274232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.274259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.284122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.284256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.284281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.284295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.284308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.284335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.294117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.294259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.294284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.294297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.294310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.294338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.304151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.304292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.304318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.304332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.304344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.304372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.314168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.314315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.314345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.314360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.314372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.314400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.324198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.324334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.324359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.324373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.324386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.324413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.334205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.334369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.334394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.334408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.334421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.334448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.344258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.344397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.344421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.344435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.344447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.344475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.354338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.354507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.354532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.354545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.354558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.354591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.364338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:12.872 [2024-07-15 16:41:52.364516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:12.872 [2024-07-15 16:41:52.364542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:12.872 [2024-07-15 16:41:52.364562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:12.872 [2024-07-15 16:41:52.364576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:12.872 [2024-07-15 16:41:52.364605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.872 qpair failed and we were unable to recover it.
00:25:12.872 [2024-07-15 16:41:52.374361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.872 [2024-07-15 16:41:52.374535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.374561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.374575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.374588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.374616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.384425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.384596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.384622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.384636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.384648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.384676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.394409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.394549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.394575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.394590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.394602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.394630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.404435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.404576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.404610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.404626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.404638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.404667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.414451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.414582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.414608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.414622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.414635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.414662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.424581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.424793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.424820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.424834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.424851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.424889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.434553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.434729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.434755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.434769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.434782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.434810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.444567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.444695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.444720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.444734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.444747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.444781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.454609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.454741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.454767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.454781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.454793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.454821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:12.873 [2024-07-15 16:41:52.464715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.873 [2024-07-15 16:41:52.464866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.873 [2024-07-15 16:41:52.464900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.873 [2024-07-15 16:41:52.464915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.873 [2024-07-15 16:41:52.464928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:12.873 [2024-07-15 16:41:52.464955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.873 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.474645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.474785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.474811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.474825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.474838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.474866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.484654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.484794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.484820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.484835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.484847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.484883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.494699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.494832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.494862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.494885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.494900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.494928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.504760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.504923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.504957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.504972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.504984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.505013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.514752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.514893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.514919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.514933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.514944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.514973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.524796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.524937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.524963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.524977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.524990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.525019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.534848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.535014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.535039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.535054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.535072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.535100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.544946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.545095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.545120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.545134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.545146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.545175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.554872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.555019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.555045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.555060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.555072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.555100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.564918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.565069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.565095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.565109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.565122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.565149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.574951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.575086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.575112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.575126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.575139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.575167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.584965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.585110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.134 [2024-07-15 16:41:52.585136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.134 [2024-07-15 16:41:52.585150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.134 [2024-07-15 16:41:52.585163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.134 [2024-07-15 16:41:52.585190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 16:41:52.595066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.134 [2024-07-15 16:41:52.595204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.135 [2024-07-15 16:41:52.595229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.135 [2024-07-15 16:41:52.595244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.135 [2024-07-15 16:41:52.595257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.135 [2024-07-15 16:41:52.595284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 16:41:52.605030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.135 [2024-07-15 16:41:52.605217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.135 [2024-07-15 16:41:52.605242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.135 [2024-07-15 16:41:52.605256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.135 [2024-07-15 16:41:52.605270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.135 [2024-07-15 16:41:52.605298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 16:41:52.615099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.135 [2024-07-15 16:41:52.615243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.135 [2024-07-15 16:41:52.615268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.135 [2024-07-15 16:41:52.615282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.135 [2024-07-15 16:41:52.615295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.135 [2024-07-15 16:41:52.615322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 16:41:52.625216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.135 [2024-07-15 16:41:52.625376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.135 [2024-07-15 16:41:52.625402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.135 [2024-07-15 16:41:52.625416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.135 [2024-07-15 16:41:52.625434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.135 [2024-07-15 16:41:52.625462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 16:41:52.635140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.135 [2024-07-15 16:41:52.635299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.135 [2024-07-15 16:41:52.635325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.135 [2024-07-15 16:41:52.635339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.135 [2024-07-15 16:41:52.635352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.135 [2024-07-15 16:41:52.635379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 16:41:52.645134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.135 [2024-07-15 16:41:52.645302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.135 [2024-07-15 16:41:52.645327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.135 [2024-07-15 16:41:52.645341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.135 [2024-07-15 16:41:52.645353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.135 [2024-07-15 16:41:52.645381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 16:41:52.655153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.135 [2024-07-15 16:41:52.655287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.135 [2024-07-15 16:41:52.655312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.135 [2024-07-15 16:41:52.655326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.135 [2024-07-15 16:41:52.655338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.135 [2024-07-15 16:41:52.655366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.656 [2024-07-15 16:41:53.016236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.656 [2024-07-15 16:41:53.016373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.656 [2024-07-15 16:41:53.016399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.656 [2024-07-15 16:41:53.016413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.656 [2024-07-15 16:41:53.016426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.656 [2024-07-15 16:41:53.016454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.656 qpair failed and we were unable to recover it. 
00:25:13.656 [2024-07-15 16:41:53.026252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.656 [2024-07-15 16:41:53.026394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.656 [2024-07-15 16:41:53.026419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.656 [2024-07-15 16:41:53.026435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.656 [2024-07-15 16:41:53.026448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.656 [2024-07-15 16:41:53.026475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.656 qpair failed and we were unable to recover it. 
00:25:13.656 [2024-07-15 16:41:53.036265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.656 [2024-07-15 16:41:53.036404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.656 [2024-07-15 16:41:53.036429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.656 [2024-07-15 16:41:53.036443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.656 [2024-07-15 16:41:53.036456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.656 [2024-07-15 16:41:53.036484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.656 qpair failed and we were unable to recover it. 
00:25:13.656 [2024-07-15 16:41:53.046336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.656 [2024-07-15 16:41:53.046478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.656 [2024-07-15 16:41:53.046504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.656 [2024-07-15 16:41:53.046523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.656 [2024-07-15 16:41:53.046535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.656 [2024-07-15 16:41:53.046563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.656 qpair failed and we were unable to recover it. 
00:25:13.656 [2024-07-15 16:41:53.056344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.656 [2024-07-15 16:41:53.056522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.656 [2024-07-15 16:41:53.056549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.656 [2024-07-15 16:41:53.056564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.656 [2024-07-15 16:41:53.056576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.656 [2024-07-15 16:41:53.056604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.656 qpair failed and we were unable to recover it. 
00:25:13.656 [2024-07-15 16:41:53.066421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.656 [2024-07-15 16:41:53.066585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.656 [2024-07-15 16:41:53.066611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.656 [2024-07-15 16:41:53.066625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.656 [2024-07-15 16:41:53.066638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.656 [2024-07-15 16:41:53.066665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.656 qpair failed and we were unable to recover it. 
00:25:13.656 [2024-07-15 16:41:53.076398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.656 [2024-07-15 16:41:53.076558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.656 [2024-07-15 16:41:53.076588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.656 [2024-07-15 16:41:53.076603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.656 [2024-07-15 16:41:53.076616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.076643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.086392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.086535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.086560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.086574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.086587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.086614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.096430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.096558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.096583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.096598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.096611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.096639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.106454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.106615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.106641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.106655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.106667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.106695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.116498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.116676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.116702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.116716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.116728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.116762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.126529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.126662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.126687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.126702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.126714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.126742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.136530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.136700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.136726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.136740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.136753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.136780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.146600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.146772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.146797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.146810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.146823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.146851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.156594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.156773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.156798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.156812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.156825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.156852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.166668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.166801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.166835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.166850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.166863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.166898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.176640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.176770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.176794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.176808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.176821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.176848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.186793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.186937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.186961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.186976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.186988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.187016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.196693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.196883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.196908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.196923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.196935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.196963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.206782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.206934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.206960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.206974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.206987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.657 [2024-07-15 16:41:53.207020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.657 qpair failed and we were unable to recover it. 
00:25:13.657 [2024-07-15 16:41:53.216770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.657 [2024-07-15 16:41:53.216915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.657 [2024-07-15 16:41:53.216940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.657 [2024-07-15 16:41:53.216955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.657 [2024-07-15 16:41:53.216967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.658 [2024-07-15 16:41:53.216995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.658 qpair failed and we were unable to recover it. 
00:25:13.658 [2024-07-15 16:41:53.226804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.658 [2024-07-15 16:41:53.226942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.658 [2024-07-15 16:41:53.226968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.658 [2024-07-15 16:41:53.226982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.658 [2024-07-15 16:41:53.226994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.658 [2024-07-15 16:41:53.227022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.658 qpair failed and we were unable to recover it. 
00:25:13.658 [2024-07-15 16:41:53.236833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.658 [2024-07-15 16:41:53.236977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.658 [2024-07-15 16:41:53.237003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.658 [2024-07-15 16:41:53.237017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.658 [2024-07-15 16:41:53.237030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.658 [2024-07-15 16:41:53.237057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.658 qpair failed and we were unable to recover it. 
00:25:13.658 [2024-07-15 16:41:53.246864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.658 [2024-07-15 16:41:53.247018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.658 [2024-07-15 16:41:53.247043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.658 [2024-07-15 16:41:53.247057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.658 [2024-07-15 16:41:53.247069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.658 [2024-07-15 16:41:53.247097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.658 qpair failed and we were unable to recover it. 
00:25:13.917 [2024-07-15 16:41:53.256897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.917 [2024-07-15 16:41:53.257037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.917 [2024-07-15 16:41:53.257068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.917 [2024-07-15 16:41:53.257083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.917 [2024-07-15 16:41:53.257096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.917 [2024-07-15 16:41:53.257123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.917 qpair failed and we were unable to recover it. 
00:25:13.917 [2024-07-15 16:41:53.266919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.917 [2024-07-15 16:41:53.267054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.917 [2024-07-15 16:41:53.267079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.917 [2024-07-15 16:41:53.267093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.917 [2024-07-15 16:41:53.267106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.917 [2024-07-15 16:41:53.267133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.917 qpair failed and we were unable to recover it. 
00:25:13.917 [2024-07-15 16:41:53.276992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.917 [2024-07-15 16:41:53.277130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.917 [2024-07-15 16:41:53.277155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.917 [2024-07-15 16:41:53.277169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.917 [2024-07-15 16:41:53.277181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.917 [2024-07-15 16:41:53.277209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.917 qpair failed and we were unable to recover it. 
00:25:13.917 [2024-07-15 16:41:53.286990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.917 [2024-07-15 16:41:53.287123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.917 [2024-07-15 16:41:53.287148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.917 [2024-07-15 16:41:53.287163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.917 [2024-07-15 16:41:53.287176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:13.917 [2024-07-15 16:41:53.287204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.917 qpair failed and we were unable to recover it. 
00:25:13.917 [2024-07-15 16:41:53.297019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.917 [2024-07-15 16:41:53.297150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.917 [2024-07-15 16:41:53.297175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.917 [2024-07-15 16:41:53.297189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.917 [2024-07-15 16:41:53.297201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.917 [2024-07-15 16:41:53.297234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.917 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.307035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.307199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.307224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.307238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.307250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.307278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.317092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.317271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.317296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.317310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.317323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.317350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.327088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.327228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.327253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.327267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.327280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.327307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.337150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.337284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.337309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.337323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.337336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.337363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.347184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.347332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.347362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.347376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.347389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.347417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.357222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.357383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.357408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.357422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.357435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.357463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.367326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.367489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.367514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.367528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.367541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.367569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.377224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.377362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.377387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.377402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.377414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.377442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.387282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.387419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.387444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.387458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.387476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.387504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.397371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.397506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.397531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.397545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.397557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.397586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.407304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.407435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.407460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.407474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.407487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.407514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.417335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.417474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.417499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.417513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.417524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.417553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.427497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.427639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.427663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.427677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.427690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.427717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.437393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.918 [2024-07-15 16:41:53.437540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.918 [2024-07-15 16:41:53.437565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.918 [2024-07-15 16:41:53.437579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.918 [2024-07-15 16:41:53.437591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.918 [2024-07-15 16:41:53.437619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.918 qpair failed and we were unable to recover it.
00:25:13.918 [2024-07-15 16:41:53.447447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.919 [2024-07-15 16:41:53.447583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.919 [2024-07-15 16:41:53.447607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.919 [2024-07-15 16:41:53.447621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.919 [2024-07-15 16:41:53.447634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.919 [2024-07-15 16:41:53.447661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.919 qpair failed and we were unable to recover it.
00:25:13.919 [2024-07-15 16:41:53.457499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.919 [2024-07-15 16:41:53.457638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.919 [2024-07-15 16:41:53.457663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.919 [2024-07-15 16:41:53.457677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.919 [2024-07-15 16:41:53.457690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.919 [2024-07-15 16:41:53.457718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.919 qpair failed and we were unable to recover it.
00:25:13.919 [2024-07-15 16:41:53.467480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.919 [2024-07-15 16:41:53.467618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.919 [2024-07-15 16:41:53.467643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.919 [2024-07-15 16:41:53.467657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.919 [2024-07-15 16:41:53.467670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.919 [2024-07-15 16:41:53.467697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.919 qpair failed and we were unable to recover it.
00:25:13.919 [2024-07-15 16:41:53.477494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.919 [2024-07-15 16:41:53.477633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.919 [2024-07-15 16:41:53.477658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.919 [2024-07-15 16:41:53.477672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.919 [2024-07-15 16:41:53.477690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.919 [2024-07-15 16:41:53.477721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.919 qpair failed and we were unable to recover it.
00:25:13.919 [2024-07-15 16:41:53.487627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.919 [2024-07-15 16:41:53.487765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.919 [2024-07-15 16:41:53.487789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.919 [2024-07-15 16:41:53.487803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.919 [2024-07-15 16:41:53.487816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.919 [2024-07-15 16:41:53.487844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.919 qpair failed and we were unable to recover it.
00:25:13.919 [2024-07-15 16:41:53.497588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.919 [2024-07-15 16:41:53.497721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.919 [2024-07-15 16:41:53.497746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.919 [2024-07-15 16:41:53.497760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.919 [2024-07-15 16:41:53.497773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.919 [2024-07-15 16:41:53.497800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.919 qpair failed and we were unable to recover it.
00:25:13.919 [2024-07-15 16:41:53.507591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:13.919 [2024-07-15 16:41:53.507730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:13.919 [2024-07-15 16:41:53.507754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:13.919 [2024-07-15 16:41:53.507768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:13.919 [2024-07-15 16:41:53.507780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:13.919 [2024-07-15 16:41:53.507808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:13.919 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.517669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.517841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.517867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.517891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.517907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.517935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.527665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.527806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.527831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.527846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.527858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.527893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.537699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.537829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.537854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.537869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.537890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.537918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.547733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.547905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.547930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.547944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.547957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.547984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.557819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.557953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.557979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.557994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.558007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.558034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.567761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.567906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.567931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.567951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.567965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.567993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.577780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.577922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.577948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.577962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.577975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.578003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.587923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.588109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.588134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.588148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.588161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.588188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.597890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.598038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.598065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.598084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.598098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.598126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.607872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.608061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.608088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.608102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.608114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.608143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.617983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.618186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.618212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.180 [2024-07-15 16:41:53.618226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.180 [2024-07-15 16:41:53.618239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.180 [2024-07-15 16:41:53.618266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.180 qpair failed and we were unable to recover it.
00:25:14.180 [2024-07-15 16:41:53.627939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.180 [2024-07-15 16:41:53.628076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.180 [2024-07-15 16:41:53.628101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.181 [2024-07-15 16:41:53.628116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.181 [2024-07-15 16:41:53.628129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.181 [2024-07-15 16:41:53.628157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.181 qpair failed and we were unable to recover it.
00:25:14.181 [2024-07-15 16:41:53.638012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.181 [2024-07-15 16:41:53.638165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.181 [2024-07-15 16:41:53.638191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.181 [2024-07-15 16:41:53.638205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.181 [2024-07-15 16:41:53.638218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.181 [2024-07-15 16:41:53.638246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.181 qpair failed and we were unable to recover it.
00:25:14.181 [2024-07-15 16:41:53.648003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:14.181 [2024-07-15 16:41:53.648139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:14.181 [2024-07-15 16:41:53.648165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:14.181 [2024-07-15 16:41:53.648190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:14.181 [2024-07-15 16:41:53.648204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:14.181 [2024-07-15 16:41:53.648232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:14.181 qpair failed and we were unable to recover it.
00:25:14.181 [2024-07-15 16:41:53.658021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.658150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.658176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.658197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.658210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.658238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.668120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.668268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.668292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.668306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.668326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.668353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.678111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.678257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.678282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.678296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.678310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.678337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.688202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.688335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.688361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.688375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.688387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.688415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.698158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.698302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.698327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.698342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.698355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.698382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.708204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.708390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.708415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.708429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.708441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.708469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.718183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.718316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.718341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.718356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.718368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.718395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.728305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.728444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.728471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.728491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.728504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.728532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.738233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.738371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.738397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.738412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.738424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.738452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.748292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.748427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.748452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.748473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.748486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.748514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.758425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.758582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.758610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.758625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.758638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.181 [2024-07-15 16:41:53.758667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.181 qpair failed and we were unable to recover it. 
00:25:14.181 [2024-07-15 16:41:53.768375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.181 [2024-07-15 16:41:53.768511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.181 [2024-07-15 16:41:53.768537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.181 [2024-07-15 16:41:53.768551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.181 [2024-07-15 16:41:53.768564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.182 [2024-07-15 16:41:53.768592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.182 qpair failed and we were unable to recover it. 
00:25:14.442 [2024-07-15 16:41:53.778374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.442 [2024-07-15 16:41:53.778512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.442 [2024-07-15 16:41:53.778538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.442 [2024-07-15 16:41:53.778552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.442 [2024-07-15 16:41:53.778564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.442 [2024-07-15 16:41:53.778592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.442 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.788391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.788532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.788557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.788571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.788584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.788611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.798408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.798543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.798568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.798581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.798595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.798622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.808507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.808672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.808697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.808711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.808724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.808751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.818463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.818599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.818625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.818639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.818652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.818679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.828591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.828745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.828770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.828785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.828798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.828825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.838578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.838711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.838741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.838755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.838768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.838795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.848615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.848743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.848768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.848782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.848795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.848822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.858594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.858728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.858753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.858767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.858779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.858808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.868630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.868782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.868807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.868821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.868833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.868861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.878704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.878865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.878900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.878915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.878927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.878955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.888679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.888809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.888834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.888848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.888861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.888896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.898716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.898845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.898870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.898892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.898906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.898934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.908810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.908958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.908983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.908997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.909010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.909038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.918768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.918912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.918937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.443 [2024-07-15 16:41:53.918952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.443 [2024-07-15 16:41:53.918964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.443 [2024-07-15 16:41:53.918992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.443 qpair failed and we were unable to recover it. 
00:25:14.443 [2024-07-15 16:41:53.928821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.443 [2024-07-15 16:41:53.928960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.443 [2024-07-15 16:41:53.928993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.929008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.929021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.929049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:53.938847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:53.939035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:53.939061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.939075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.939087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.939114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:53.948922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:53.949062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:53.949087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.949101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.949113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.949141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:53.958890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:53.959023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:53.959047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.959062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.959074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.959102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:53.968946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:53.969081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:53.969107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.969120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.969133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.969166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:53.978945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:53.979074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:53.979099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.979113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.979125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.979152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:53.988991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:53.989131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:53.989156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.989170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.989183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.989210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:53.999016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:53.999171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:53.999198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:53.999213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:53.999225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:53.999253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:54.009124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:54.009257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:54.009282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:54.009296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:54.009309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:54.009337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:54.019068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:54.019219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:54.019249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:54.019264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:54.019277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:54.019305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.444 [2024-07-15 16:41:54.029096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.444 [2024-07-15 16:41:54.029264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.444 [2024-07-15 16:41:54.029290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.444 [2024-07-15 16:41:54.029304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.444 [2024-07-15 16:41:54.029317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.444 [2024-07-15 16:41:54.029344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.444 qpair failed and we were unable to recover it. 
00:25:14.706 [2024-07-15 16:41:54.039115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.706 [2024-07-15 16:41:54.039264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.706 [2024-07-15 16:41:54.039290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.706 [2024-07-15 16:41:54.039304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.706 [2024-07-15 16:41:54.039317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.706 [2024-07-15 16:41:54.039344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.706 qpair failed and we were unable to recover it. 
00:25:14.706 [2024-07-15 16:41:54.049155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.706 [2024-07-15 16:41:54.049301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.706 [2024-07-15 16:41:54.049326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.706 [2024-07-15 16:41:54.049339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.706 [2024-07-15 16:41:54.049351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.706 [2024-07-15 16:41:54.049378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.706 qpair failed and we were unable to recover it. 
00:25:14.706 [2024-07-15 16:41:54.059159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.706 [2024-07-15 16:41:54.059290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.706 [2024-07-15 16:41:54.059315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.706 [2024-07-15 16:41:54.059329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.706 [2024-07-15 16:41:54.059342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.706 [2024-07-15 16:41:54.059377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.706 qpair failed and we were unable to recover it. 
00:25:14.706 [2024-07-15 16:41:54.069204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.706 [2024-07-15 16:41:54.069348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.706 [2024-07-15 16:41:54.069374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.706 [2024-07-15 16:41:54.069389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.706 [2024-07-15 16:41:54.069401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.706 [2024-07-15 16:41:54.069429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.706 qpair failed and we were unable to recover it. 
00:25:14.706 [2024-07-15 16:41:54.079220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.706 [2024-07-15 16:41:54.079403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.079429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.079444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.079456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.079486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.089255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.089388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.089413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.089427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.089440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.089467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.099278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.099415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.099441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.099456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.099468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.099495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.109306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.109448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.109479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.109496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.109508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.109536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.119362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.119500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.119526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.119540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.119553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.119580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.129362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.129507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.129532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.129546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.129560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.129589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.139390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.139520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.139545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.139559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.139571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.139601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.149462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.149621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.149646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.149659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.149677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.149705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.159442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.159577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.159603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.159617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.159630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.159657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.169471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.169609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.169634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.169648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.169661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.169688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.179519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.179649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.179674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.179688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.179700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.179727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.189573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.189720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.189745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.189759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.189772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.189799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.199574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.199715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.199740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.199754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.199767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.199793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.209589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.707 [2024-07-15 16:41:54.209760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.707 [2024-07-15 16:41:54.209785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.707 [2024-07-15 16:41:54.209800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.707 [2024-07-15 16:41:54.209812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.707 [2024-07-15 16:41:54.209840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.707 qpair failed and we were unable to recover it. 
00:25:14.707 [2024-07-15 16:41:54.219744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.219882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.219908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.219922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.219935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.219963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.229788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.229941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.229965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.229979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.229992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.230020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.239681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.239813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.239838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.239852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.239870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.239906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.249704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.249884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.249910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.249924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.249938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.249966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.259724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.259860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.259895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.259910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.259923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.259951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.269777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.269934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.269960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.269974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.269986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.270015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.279821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.279957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.279982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.279997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.280009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.280036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.289891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.290036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.290062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.290076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.290089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.290117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.708 [2024-07-15 16:41:54.299856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.708 [2024-07-15 16:41:54.299996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.708 [2024-07-15 16:41:54.300022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.708 [2024-07-15 16:41:54.300035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.708 [2024-07-15 16:41:54.300048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.708 [2024-07-15 16:41:54.300075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.708 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.309897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.310048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.310073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.310087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.310100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.310127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.319933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.320076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.320101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.320115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.320128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.320156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.329960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.330118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.330145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.330160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.330183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.330213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.339961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.340093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.340118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.340133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.340146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.340173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.350024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.350175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.350200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.350214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.350227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.350255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.360018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.360184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.360209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.360223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.360235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.360263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.370056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.370186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.370212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.370226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.370239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.370266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.380079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.380211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.380236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.380250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.380263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.380290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.390235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.390374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.390399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.390413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.390425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.390452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.400160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.400293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.400318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.400332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.400344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.400372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.410163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.969 [2024-07-15 16:41:54.410288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.969 [2024-07-15 16:41:54.410313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.969 [2024-07-15 16:41:54.410326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.969 [2024-07-15 16:41:54.410339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.969 [2024-07-15 16:41:54.410366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.969 qpair failed and we were unable to recover it. 
00:25:14.969 [2024-07-15 16:41:54.420179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.420304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.420328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.420348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.420362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.420390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.430224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.430380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.430405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.430419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.430432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.430459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.440261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.440406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.440431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.440445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.440457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.440485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.450296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.450437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.450462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.450476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.450489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.450517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.460296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.460425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.460450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.460464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.460477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.460504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.470328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.470466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.470491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.470505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.470518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.470546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.480381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.480511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.480537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.480551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.480563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.480590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.490373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.490512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.490537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.490551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.490564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.490591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.500453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.500601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.500626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.500640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.500652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.500680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.510502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.510683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.510708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.510728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.510741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.510769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.520482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.520622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.520647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.520661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.520674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.520702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.530530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.530659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.530684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.530698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.530710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.530738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.540533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.540665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.540691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.540705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.540718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.540746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.550578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.550713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.550738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.550753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.970 [2024-07-15 16:41:54.550765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.970 [2024-07-15 16:41:54.550794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.970 qpair failed and we were unable to recover it. 
00:25:14.970 [2024-07-15 16:41:54.560600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.970 [2024-07-15 16:41:54.560740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.970 [2024-07-15 16:41:54.560765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.970 [2024-07-15 16:41:54.560780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.971 [2024-07-15 16:41:54.560792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:14.971 [2024-07-15 16:41:54.560820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.971 qpair failed and we were unable to recover it. 
00:25:15.230 [2024-07-15 16:41:54.570718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.230 [2024-07-15 16:41:54.570853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.230 [2024-07-15 16:41:54.570886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.230 [2024-07-15 16:41:54.570902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.230 [2024-07-15 16:41:54.570915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.230 [2024-07-15 16:41:54.570952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.230 qpair failed and we were unable to recover it. 
00:25:15.230 [2024-07-15 16:41:54.580656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.580793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.580819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.580833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.580845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.580873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.590702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.590841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.590866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.590888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.590905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.590933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.600712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.600846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.600883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.600901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.600914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.600942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.610740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.610871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.610903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.610917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.610930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.610957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.620758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.620894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.620919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.620933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.620945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.620973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.630806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.630947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.630972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.630986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.630999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.631027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.640824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.640965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.640990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.641004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.641017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.641044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.650839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.651029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.651055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.651069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.651081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.651109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.660902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.661042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.661067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.661081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.661094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.661121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.670915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.671055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.671081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.671095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.671107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.671135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.681010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.681179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.681206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.681221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.681233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.681262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.690960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.691095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.691129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.691144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.691157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.691184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.701006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.701137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.701163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.701177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.701190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.701217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.711142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.231 [2024-07-15 16:41:54.711292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.231 [2024-07-15 16:41:54.711318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.231 [2024-07-15 16:41:54.711332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.231 [2024-07-15 16:41:54.711345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.231 [2024-07-15 16:41:54.711373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.231 qpair failed and we were unable to recover it.
00:25:15.231 [2024-07-15 16:41:54.721080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.721214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.721239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.721253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.721266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.721297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.731090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.731225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.731250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.731264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.731277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.731311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.741202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.741333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.741358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.741373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.741385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.741412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.751150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.751294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.751318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.751332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.751345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.751372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.761182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.761315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.761340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.761354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.761367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.761395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.771225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.771353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.771377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.771391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.771404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.771431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.781332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.781472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.781503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.781518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.781531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.781558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.791255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.791392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.791417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.791431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.791444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.791471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.801305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.801455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.801480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.801494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.801507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.801535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.811341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.811476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.811501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.811516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.811529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.811557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.232 [2024-07-15 16:41:54.821372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.232 [2024-07-15 16:41:54.821501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.232 [2024-07-15 16:41:54.821527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.232 [2024-07-15 16:41:54.821541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.232 [2024-07-15 16:41:54.821554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.232 [2024-07-15 16:41:54.821588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.232 qpair failed and we were unable to recover it.
00:25:15.491 [2024-07-15 16:41:54.831430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.491 [2024-07-15 16:41:54.831570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.491 [2024-07-15 16:41:54.831595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.491 [2024-07-15 16:41:54.831608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.491 [2024-07-15 16:41:54.831621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.491 [2024-07-15 16:41:54.831649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.491 qpair failed and we were unable to recover it.
00:25:15.491 [2024-07-15 16:41:54.841399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.491 [2024-07-15 16:41:54.841540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.491 [2024-07-15 16:41:54.841565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.491 [2024-07-15 16:41:54.841580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.491 [2024-07-15 16:41:54.841592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.491 [2024-07-15 16:41:54.841620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.491 qpair failed and we were unable to recover it.
00:25:15.491 [2024-07-15 16:41:54.851450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.491 [2024-07-15 16:41:54.851582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.491 [2024-07-15 16:41:54.851608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.491 [2024-07-15 16:41:54.851622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.491 [2024-07-15 16:41:54.851635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.491 [2024-07-15 16:41:54.851662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.861442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.861572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.861597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.861611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.861624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.861651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.871478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.871648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.871678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.871693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.871706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.871733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.881512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.881647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.881673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.881687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.881700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.881727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.891555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.891751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.891778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.891793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.891806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.891835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.901574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.901709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.901734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.901748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.901761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.901788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.911614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.911763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.911789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.911804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.911822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.911850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.921677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.921818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.921845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.921864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.921884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.921918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.931742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.492 [2024-07-15 16:41:54.931886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.492 [2024-07-15 16:41:54.931912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.492 [2024-07-15 16:41:54.931926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.492 [2024-07-15 16:41:54.931939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.492 [2024-07-15 16:41:54.931967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.492 qpair failed and we were unable to recover it.
00:25:15.492 [2024-07-15 16:41:54.941698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.492 [2024-07-15 16:41:54.941829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.492 [2024-07-15 16:41:54.941855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.492 [2024-07-15 16:41:54.941869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.492 [2024-07-15 16:41:54.941889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.492 [2024-07-15 16:41:54.941918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.492 qpair failed and we were unable to recover it. 
00:25:15.492 [2024-07-15 16:41:54.951734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.492 [2024-07-15 16:41:54.951869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.492 [2024-07-15 16:41:54.951901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.492 [2024-07-15 16:41:54.951916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.492 [2024-07-15 16:41:54.951928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.492 [2024-07-15 16:41:54.951956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.492 qpair failed and we were unable to recover it. 
00:25:15.492 [2024-07-15 16:41:54.961797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.492 [2024-07-15 16:41:54.961944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.492 [2024-07-15 16:41:54.961969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.492 [2024-07-15 16:41:54.961984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.492 [2024-07-15 16:41:54.961996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.492 [2024-07-15 16:41:54.962024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.492 qpair failed and we were unable to recover it. 
00:25:15.492 [2024-07-15 16:41:54.971775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.492 [2024-07-15 16:41:54.971920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.492 [2024-07-15 16:41:54.971945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.492 [2024-07-15 16:41:54.971959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.492 [2024-07-15 16:41:54.971971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.492 [2024-07-15 16:41:54.971999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.492 qpair failed and we were unable to recover it. 
00:25:15.492 [2024-07-15 16:41:54.981809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.492 [2024-07-15 16:41:54.981947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.492 [2024-07-15 16:41:54.981972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.492 [2024-07-15 16:41:54.981986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.492 [2024-07-15 16:41:54.981998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.492 [2024-07-15 16:41:54.982025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.492 qpair failed and we were unable to recover it. 
00:25:15.492 [2024-07-15 16:41:54.991934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.492 [2024-07-15 16:41:54.992074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.492 [2024-07-15 16:41:54.992099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.492 [2024-07-15 16:41:54.992113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.492 [2024-07-15 16:41:54.992125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.492 [2024-07-15 16:41:54.992153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.001881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.002015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.002041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.002055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.002073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.002101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.011917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.012075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.012100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.012114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.012127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.012155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.021939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.022072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.022097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.022111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.022124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.022151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.031958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.032094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.032119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.032133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.032144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.032172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.041970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.042104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.042130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.042144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.042156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.042184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.051998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.052135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.052159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.052173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.052185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.052212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.062106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.062234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.062260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.062274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.062287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.062315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.072088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.072253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.072279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.072293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.072306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.072333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.493 [2024-07-15 16:41:55.082173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.493 [2024-07-15 16:41:55.082313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.493 [2024-07-15 16:41:55.082339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.493 [2024-07-15 16:41:55.082353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.493 [2024-07-15 16:41:55.082365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.493 [2024-07-15 16:41:55.082394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.493 qpair failed and we were unable to recover it. 
00:25:15.752 [2024-07-15 16:41:55.092122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.752 [2024-07-15 16:41:55.092260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.752 [2024-07-15 16:41:55.092286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.752 [2024-07-15 16:41:55.092300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.752 [2024-07-15 16:41:55.092319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.752 [2024-07-15 16:41:55.092348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.752 qpair failed and we were unable to recover it. 
00:25:15.752 [2024-07-15 16:41:55.102231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.752 [2024-07-15 16:41:55.102366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.752 [2024-07-15 16:41:55.102392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.752 [2024-07-15 16:41:55.102406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.752 [2024-07-15 16:41:55.102419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.752 [2024-07-15 16:41:55.102446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.752 qpair failed and we were unable to recover it. 
00:25:15.752 [2024-07-15 16:41:55.112195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.752 [2024-07-15 16:41:55.112334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.752 [2024-07-15 16:41:55.112359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.752 [2024-07-15 16:41:55.112373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.752 [2024-07-15 16:41:55.112385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.752 [2024-07-15 16:41:55.112413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.752 qpair failed and we were unable to recover it. 
00:25:15.752 [2024-07-15 16:41:55.122216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.752 [2024-07-15 16:41:55.122363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.752 [2024-07-15 16:41:55.122388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.752 [2024-07-15 16:41:55.122402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.752 [2024-07-15 16:41:55.122415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.752 [2024-07-15 16:41:55.122442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.752 qpair failed and we were unable to recover it. 
00:25:15.752 [2024-07-15 16:41:55.132238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.752 [2024-07-15 16:41:55.132375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.752 [2024-07-15 16:41:55.132400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.752 [2024-07-15 16:41:55.132413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.752 [2024-07-15 16:41:55.132427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.752 [2024-07-15 16:41:55.132455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.752 qpair failed and we were unable to recover it. 
00:25:15.752 [2024-07-15 16:41:55.142244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.752 [2024-07-15 16:41:55.142381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.752 [2024-07-15 16:41:55.142406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.752 [2024-07-15 16:41:55.142420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.752 [2024-07-15 16:41:55.142433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.752 [2024-07-15 16:41:55.142460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.752 qpair failed and we were unable to recover it. 
00:25:15.752 [2024-07-15 16:41:55.152386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.753 [2024-07-15 16:41:55.152526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.753 [2024-07-15 16:41:55.152551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.753 [2024-07-15 16:41:55.152565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.753 [2024-07-15 16:41:55.152578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.753 [2024-07-15 16:41:55.152605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.753 qpair failed and we were unable to recover it. 
00:25:15.753 [2024-07-15 16:41:55.162348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.753 [2024-07-15 16:41:55.162482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.753 [2024-07-15 16:41:55.162507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.753 [2024-07-15 16:41:55.162521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.753 [2024-07-15 16:41:55.162534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.753 [2024-07-15 16:41:55.162561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.753 qpair failed and we were unable to recover it. 
00:25:15.753 [2024-07-15 16:41:55.172392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.753 [2024-07-15 16:41:55.172529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.753 [2024-07-15 16:41:55.172555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.753 [2024-07-15 16:41:55.172569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.753 [2024-07-15 16:41:55.172582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.753 [2024-07-15 16:41:55.172609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.753 qpair failed and we were unable to recover it. 
00:25:15.753 [2024-07-15 16:41:55.182375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.753 [2024-07-15 16:41:55.182504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.753 [2024-07-15 16:41:55.182530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.753 [2024-07-15 16:41:55.182550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.753 [2024-07-15 16:41:55.182563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.753 [2024-07-15 16:41:55.182591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.753 qpair failed and we were unable to recover it. 
00:25:15.753 [2024-07-15 16:41:55.192522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.753 [2024-07-15 16:41:55.192658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.753 [2024-07-15 16:41:55.192683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.753 [2024-07-15 16:41:55.192697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.753 [2024-07-15 16:41:55.192710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.753 [2024-07-15 16:41:55.192737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.753 qpair failed and we were unable to recover it. 
00:25:15.753 [2024-07-15 16:41:55.202428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.753 [2024-07-15 16:41:55.202561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.753 [2024-07-15 16:41:55.202586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.753 [2024-07-15 16:41:55.202600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.753 [2024-07-15 16:41:55.202613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.753 [2024-07-15 16:41:55.202641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.753 qpair failed and we were unable to recover it. 
00:25:15.753 [2024-07-15 16:41:55.212459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.753 [2024-07-15 16:41:55.212587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.753 [2024-07-15 16:41:55.212613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.753 [2024-07-15 16:41:55.212627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.753 [2024-07-15 16:41:55.212639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:15.753 [2024-07-15 16:41:55.212667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:15.753 qpair failed and we were unable to recover it. 
00:25:15.753 [2024-07-15 16:41:55.222496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.222629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.753 [2024-07-15 16:41:55.222655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.753 [2024-07-15 16:41:55.222669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.753 [2024-07-15 16:41:55.222682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.753 [2024-07-15 16:41:55.222709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.753 qpair failed and we were unable to recover it.
00:25:15.753 [2024-07-15 16:41:55.232549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.232705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.753 [2024-07-15 16:41:55.232730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.753 [2024-07-15 16:41:55.232744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.753 [2024-07-15 16:41:55.232756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.753 [2024-07-15 16:41:55.232783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.753 qpair failed and we were unable to recover it.
00:25:15.753 [2024-07-15 16:41:55.242606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.242767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.753 [2024-07-15 16:41:55.242793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.753 [2024-07-15 16:41:55.242807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.753 [2024-07-15 16:41:55.242821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.753 [2024-07-15 16:41:55.242849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.753 qpair failed and we were unable to recover it.
00:25:15.753 [2024-07-15 16:41:55.252576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.252707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.753 [2024-07-15 16:41:55.252732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.753 [2024-07-15 16:41:55.252747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.753 [2024-07-15 16:41:55.252760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.753 [2024-07-15 16:41:55.252787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.753 qpair failed and we were unable to recover it.
00:25:15.753 [2024-07-15 16:41:55.262680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.262838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.753 [2024-07-15 16:41:55.262863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.753 [2024-07-15 16:41:55.262888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.753 [2024-07-15 16:41:55.262903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.753 [2024-07-15 16:41:55.262931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.753 qpair failed and we were unable to recover it.
00:25:15.753 [2024-07-15 16:41:55.272647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.272786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.753 [2024-07-15 16:41:55.272811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.753 [2024-07-15 16:41:55.272831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.753 [2024-07-15 16:41:55.272844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.753 [2024-07-15 16:41:55.272872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.753 qpair failed and we were unable to recover it.
00:25:15.753 [2024-07-15 16:41:55.282675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.282815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.753 [2024-07-15 16:41:55.282840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.753 [2024-07-15 16:41:55.282854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.753 [2024-07-15 16:41:55.282867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.753 [2024-07-15 16:41:55.282900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.753 qpair failed and we were unable to recover it.
00:25:15.753 [2024-07-15 16:41:55.292710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.753 [2024-07-15 16:41:55.292856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.754 [2024-07-15 16:41:55.292887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.754 [2024-07-15 16:41:55.292902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.754 [2024-07-15 16:41:55.292915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.754 [2024-07-15 16:41:55.292943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.754 qpair failed and we were unable to recover it.
00:25:15.754 [2024-07-15 16:41:55.302730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.754 [2024-07-15 16:41:55.302868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.754 [2024-07-15 16:41:55.302907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.754 [2024-07-15 16:41:55.302922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.754 [2024-07-15 16:41:55.302934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.754 [2024-07-15 16:41:55.302962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.754 qpair failed and we were unable to recover it.
00:25:15.754 [2024-07-15 16:41:55.312770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.754 [2024-07-15 16:41:55.312921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.754 [2024-07-15 16:41:55.312947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.754 [2024-07-15 16:41:55.312961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.754 [2024-07-15 16:41:55.312973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.754 [2024-07-15 16:41:55.313001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.754 qpair failed and we were unable to recover it.
00:25:15.754 [2024-07-15 16:41:55.322803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.754 [2024-07-15 16:41:55.322948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.754 [2024-07-15 16:41:55.322973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.754 [2024-07-15 16:41:55.322987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.754 [2024-07-15 16:41:55.323000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.754 [2024-07-15 16:41:55.323028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.754 qpair failed and we were unable to recover it.
00:25:15.754 [2024-07-15 16:41:55.332917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.754 [2024-07-15 16:41:55.333063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.754 [2024-07-15 16:41:55.333088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.754 [2024-07-15 16:41:55.333102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.754 [2024-07-15 16:41:55.333115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.754 [2024-07-15 16:41:55.333143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.754 qpair failed and we were unable to recover it.
00:25:15.754 [2024-07-15 16:41:55.342829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:15.754 [2024-07-15 16:41:55.342991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:15.754 [2024-07-15 16:41:55.343017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:15.754 [2024-07-15 16:41:55.343031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:15.754 [2024-07-15 16:41:55.343043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:15.754 [2024-07-15 16:41:55.343070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:15.754 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.352922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.353068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.353093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.013 [2024-07-15 16:41:55.353107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.013 [2024-07-15 16:41:55.353120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.013 [2024-07-15 16:41:55.353148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.013 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.362942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.363141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.363177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.013 [2024-07-15 16:41:55.363197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.013 [2024-07-15 16:41:55.363210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.013 [2024-07-15 16:41:55.363238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.013 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.373016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.373166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.373190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.013 [2024-07-15 16:41:55.373205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.013 [2024-07-15 16:41:55.373217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.013 [2024-07-15 16:41:55.373245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.013 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.383011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.383144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.383169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.013 [2024-07-15 16:41:55.383183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.013 [2024-07-15 16:41:55.383196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.013 [2024-07-15 16:41:55.383226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.013 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.392994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.393130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.393155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.013 [2024-07-15 16:41:55.393170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.013 [2024-07-15 16:41:55.393182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.013 [2024-07-15 16:41:55.393210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.013 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.403030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.403237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.403265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.013 [2024-07-15 16:41:55.403280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.013 [2024-07-15 16:41:55.403296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.013 [2024-07-15 16:41:55.403326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.013 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.413051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.413184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.413209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.013 [2024-07-15 16:41:55.413224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.013 [2024-07-15 16:41:55.413236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.013 [2024-07-15 16:41:55.413264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.013 qpair failed and we were unable to recover it.
00:25:16.013 [2024-07-15 16:41:55.423121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.013 [2024-07-15 16:41:55.423288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.013 [2024-07-15 16:41:55.423316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.423331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.423344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.423372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.433113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.433250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.433276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.433290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.433303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.433330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.443144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.443351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.443377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.443392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.443408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.443438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.453146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.453282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.453316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.453331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.453344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.453372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.463176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.463303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.463329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.463344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.463356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.463383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.473240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.473429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.473454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.473469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.473481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.473508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.483240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.483417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.483442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.483457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.483469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.483497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.493266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.493393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.493418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.493432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.493444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.493477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.503352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.503489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.503514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.503528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.503541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.503568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.513412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.513550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.513576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.513590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.513602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.513630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.523351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.523489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.523514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.523528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.523541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.523569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.533382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.533558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.533583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.533597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.533609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.533638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.543475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.543611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.543642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.543657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.014 [2024-07-15 16:41:55.543670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.014 [2024-07-15 16:41:55.543697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.014 qpair failed and we were unable to recover it.
00:25:16.014 [2024-07-15 16:41:55.553460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.014 [2024-07-15 16:41:55.553598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.014 [2024-07-15 16:41:55.553623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.014 [2024-07-15 16:41:55.553637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.015 [2024-07-15 16:41:55.553650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.015 [2024-07-15 16:41:55.553678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.015 qpair failed and we were unable to recover it.
00:25:16.015 [2024-07-15 16:41:55.563451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.015 [2024-07-15 16:41:55.563621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.015 [2024-07-15 16:41:55.563646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.015 [2024-07-15 16:41:55.563659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.015 [2024-07-15 16:41:55.563672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.015 [2024-07-15 16:41:55.563700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.015 qpair failed and we were unable to recover it.
00:25:16.015 [2024-07-15 16:41:55.573481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:16.015 [2024-07-15 16:41:55.573638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:16.015 [2024-07-15 16:41:55.573663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:16.015 [2024-07-15 16:41:55.573679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:16.015 [2024-07-15 16:41:55.573692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200
00:25:16.015 [2024-07-15 16:41:55.573719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:16.015 qpair failed and we were unable to recover it.
00:25:16.015 [2024-07-15 16:41:55.583548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.015 [2024-07-15 16:41:55.583677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.015 [2024-07-15 16:41:55.583703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.015 [2024-07-15 16:41:55.583717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.015 [2024-07-15 16:41:55.583729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.015 [2024-07-15 16:41:55.583763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.015 qpair failed and we were unable to recover it. 
00:25:16.015 [2024-07-15 16:41:55.593629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.015 [2024-07-15 16:41:55.593764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.015 [2024-07-15 16:41:55.593789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.015 [2024-07-15 16:41:55.593802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.015 [2024-07-15 16:41:55.593815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.015 [2024-07-15 16:41:55.593842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.015 qpair failed and we were unable to recover it. 
00:25:16.015 [2024-07-15 16:41:55.603557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.015 [2024-07-15 16:41:55.603693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.015 [2024-07-15 16:41:55.603718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.015 [2024-07-15 16:41:55.603733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.015 [2024-07-15 16:41:55.603745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.015 [2024-07-15 16:41:55.603773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.015 qpair failed and we were unable to recover it. 
00:25:16.274 [2024-07-15 16:41:55.613653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.274 [2024-07-15 16:41:55.613828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.274 [2024-07-15 16:41:55.613853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.274 [2024-07-15 16:41:55.613867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.274 [2024-07-15 16:41:55.613886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.274 [2024-07-15 16:41:55.613916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.274 qpair failed and we were unable to recover it. 
00:25:16.274 [2024-07-15 16:41:55.623674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.274 [2024-07-15 16:41:55.623810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.623836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.623857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.623870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.623909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.633672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.633844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.633882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.633900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.633913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.633942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.643713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.643847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.643873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.643899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.643912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.643940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.653743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.653916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.653942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.653956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.653969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.653997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.663734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.663866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.663898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.663912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.663925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.663953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.673789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.673961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.673988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.674002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.674018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.674054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.683896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.684043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.684069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.684084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.684097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.684124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.693862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.694017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.694043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.694057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.694070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.694098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.703864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.704029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.704055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.704069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.704082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.704109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.714000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.714139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.714164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.714178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.714191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.714219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.724000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.724140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.724170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.724185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.724197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.724225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.734014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.734208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.734233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.734247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.734260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.734288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.275 [2024-07-15 16:41:55.743966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.275 [2024-07-15 16:41:55.744100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.275 [2024-07-15 16:41:55.744126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.275 [2024-07-15 16:41:55.744140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.275 [2024-07-15 16:41:55.744153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.275 [2024-07-15 16:41:55.744181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.275 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.754028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.754165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.754190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.754204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.754217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.754244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.764034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.764162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.764187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.764201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.764219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.764247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.774068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.774203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.774228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.774243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.774255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.774282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.784118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.784251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.784276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.784291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.784303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.784331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.794147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.794280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.794305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.794320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.794332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.794360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.804154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.804285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.804310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.804324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.804337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.804364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.814215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.814403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.814428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.814442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.814455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.814482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.824246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.824401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.824426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.824439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.824451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.824479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.834316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.834458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.834483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.834496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.834509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.834536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.844341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.844498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.844525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.844540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.844556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.844586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.854342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.854512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.854538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.854552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.854570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.854599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.276 [2024-07-15 16:41:55.864327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.276 [2024-07-15 16:41:55.864459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.276 [2024-07-15 16:41:55.864484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.276 [2024-07-15 16:41:55.864500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.276 [2024-07-15 16:41:55.864512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.276 [2024-07-15 16:41:55.864540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.276 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.874421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.874580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.874606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.874620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.874633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.874661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.884408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.884553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.884579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.884593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.884606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.884633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.894425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.894554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.894580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.894594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.894606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.894634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.904569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.904719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.904745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.904759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.904772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.904800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.914495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.914634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.914659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.914673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.914686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.914713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.924538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.924675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.924700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.924714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.924727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.924754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.934550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.934698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.934723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.934738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.934750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.934778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.944580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.944726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.944752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.944772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.538 [2024-07-15 16:41:55.944786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.538 [2024-07-15 16:41:55.944814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.538 qpair failed and we were unable to recover it. 
00:25:16.538 [2024-07-15 16:41:55.954623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.538 [2024-07-15 16:41:55.954802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.538 [2024-07-15 16:41:55.954826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.538 [2024-07-15 16:41:55.954840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:55.954854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:55.954890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:55.964652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:55.964785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:55.964810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:55.964825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:55.964838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:55.964865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:55.974649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:55.974782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:55.974807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:55.974822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:55.974835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:55.974862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:55.984743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:55.984881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:55.984907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:55.984921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:55.984933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:55.984960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:55.994811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:55.994960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:55.994986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:55.995000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:55.995013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:55.995040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.004771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.004914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.004942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.004962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.004975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.005004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.014814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.014957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.014983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.014997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.015010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.015038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.024785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.024927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.024953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.024967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.024980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.025007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.034842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.035027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.035052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.035072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.035085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.035113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.044846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.045038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.045064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.045079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.045092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.045119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.054939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.055079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.055104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.055118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.055130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.055157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.064935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.065086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.065112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.065126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.065139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.065167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.074964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.075102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.539 [2024-07-15 16:41:56.075126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.539 [2024-07-15 16:41:56.075140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.539 [2024-07-15 16:41:56.075152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.539 [2024-07-15 16:41:56.075179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.539 qpair failed and we were unable to recover it. 
00:25:16.539 [2024-07-15 16:41:56.085010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.539 [2024-07-15 16:41:56.085152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.540 [2024-07-15 16:41:56.085178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.540 [2024-07-15 16:41:56.085192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.540 [2024-07-15 16:41:56.085205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.540 [2024-07-15 16:41:56.085232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.540 qpair failed and we were unable to recover it. 
00:25:16.540 [2024-07-15 16:41:56.095007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.540 [2024-07-15 16:41:56.095156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.540 [2024-07-15 16:41:56.095181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.540 [2024-07-15 16:41:56.095195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.540 [2024-07-15 16:41:56.095208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.540 [2024-07-15 16:41:56.095235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.540 qpair failed and we were unable to recover it. 
00:25:16.540 [2024-07-15 16:41:56.105028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.540 [2024-07-15 16:41:56.105156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.540 [2024-07-15 16:41:56.105181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.540 [2024-07-15 16:41:56.105195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.540 [2024-07-15 16:41:56.105208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.540 [2024-07-15 16:41:56.105235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.540 qpair failed and we were unable to recover it. 
00:25:16.540 [2024-07-15 16:41:56.115180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.540 [2024-07-15 16:41:56.115371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.540 [2024-07-15 16:41:56.115396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.540 [2024-07-15 16:41:56.115410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.540 [2024-07-15 16:41:56.115423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.540 [2024-07-15 16:41:56.115451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.540 qpair failed and we were unable to recover it. 
00:25:16.540 [2024-07-15 16:41:56.125088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.540 [2024-07-15 16:41:56.125226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.540 [2024-07-15 16:41:56.125252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.540 [2024-07-15 16:41:56.125272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.540 [2024-07-15 16:41:56.125285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.540 [2024-07-15 16:41:56.125313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.540 qpair failed and we were unable to recover it. 
00:25:16.799 [2024-07-15 16:41:56.135162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.799 [2024-07-15 16:41:56.135301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.799 [2024-07-15 16:41:56.135326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.799 [2024-07-15 16:41:56.135340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.799 [2024-07-15 16:41:56.135352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.799 [2024-07-15 16:41:56.135380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.799 qpair failed and we were unable to recover it. 
00:25:16.799 [2024-07-15 16:41:56.145178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.799 [2024-07-15 16:41:56.145311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.799 [2024-07-15 16:41:56.145337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.799 [2024-07-15 16:41:56.145351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.799 [2024-07-15 16:41:56.145363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.799 [2024-07-15 16:41:56.145391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.799 qpair failed and we were unable to recover it. 
00:25:16.799 [2024-07-15 16:41:56.155209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.799 [2024-07-15 16:41:56.155374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.799 [2024-07-15 16:41:56.155400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.799 [2024-07-15 16:41:56.155414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.799 [2024-07-15 16:41:56.155431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.799 [2024-07-15 16:41:56.155460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.799 qpair failed and we were unable to recover it. 
00:25:16.799 [2024-07-15 16:41:56.165334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.800 [2024-07-15 16:41:56.165468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.800 [2024-07-15 16:41:56.165494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.800 [2024-07-15 16:41:56.165508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.800 [2024-07-15 16:41:56.165520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.800 [2024-07-15 16:41:56.165549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.800 qpair failed and we were unable to recover it. 
00:25:16.800 [2024-07-15 16:41:56.175293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.800 [2024-07-15 16:41:56.175458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.800 [2024-07-15 16:41:56.175484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.800 [2024-07-15 16:41:56.175498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.800 [2024-07-15 16:41:56.175511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.800 [2024-07-15 16:41:56.175538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.800 qpair failed and we were unable to recover it. 
00:25:16.800 [2024-07-15 16:41:56.185271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.800 [2024-07-15 16:41:56.185402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.800 [2024-07-15 16:41:56.185427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.800 [2024-07-15 16:41:56.185441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.800 [2024-07-15 16:41:56.185454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.800 [2024-07-15 16:41:56.185482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.800 qpair failed and we were unable to recover it. 
00:25:16.800 [2024-07-15 16:41:56.195376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.800 [2024-07-15 16:41:56.195536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.800 [2024-07-15 16:41:56.195561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.800 [2024-07-15 16:41:56.195575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.800 [2024-07-15 16:41:56.195588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1628200 00:25:16.800 [2024-07-15 16:41:56.195616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:16.800 qpair failed and we were unable to recover it. 00:25:16.800 Controller properly reset. 00:25:16.800 Initializing NVMe Controllers 00:25:16.800 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:16.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:16.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:16.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:16.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:16.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:16.800 Initialization complete. Launching workers. 
00:25:16.800 Starting thread on core 1 00:25:16.800 Starting thread on core 2 00:25:16.800 Starting thread on core 3 00:25:16.800 Starting thread on core 0 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:16.800 00:25:16.800 real 0m10.770s 00:25:16.800 user 0m17.751s 00:25:16.800 sys 0m5.461s 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.800 ************************************ 00:25:16.800 END TEST nvmf_target_disconnect_tc2 00:25:16.800 ************************************ 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.800 rmmod nvme_tcp 00:25:16.800 rmmod nvme_fabrics 00:25:16.800 rmmod nvme_keyring 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.800 
16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1622013 ']' 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1622013 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1622013 ']' 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1622013 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:16.800 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1622013 00:25:17.068 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:17.068 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:17.068 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1622013' 00:25:17.068 killing process with pid 1622013 00:25:17.068 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1622013 00:25:17.068 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1622013 00:25:17.328 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.328 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:17.328 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:17.328 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.328 16:41:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:17.328 16:41:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.328 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.328 16:41:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.234 16:41:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:19.234 00:25:19.234 real 0m15.546s 00:25:19.234 user 0m43.713s 00:25:19.234 sys 0m7.478s 00:25:19.234 16:41:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.234 16:41:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:19.234 ************************************ 00:25:19.234 END TEST nvmf_target_disconnect 00:25:19.234 ************************************ 00:25:19.234 16:41:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:19.234 16:41:58 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:19.234 16:41:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.234 16:41:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.234 16:41:58 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:19.234 00:25:19.234 real 19m37.424s 00:25:19.234 user 46m29.843s 00:25:19.234 sys 4m55.220s 00:25:19.234 16:41:58 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.234 16:41:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.234 ************************************ 00:25:19.234 END TEST nvmf_tcp 00:25:19.234 ************************************ 00:25:19.234 16:41:58 -- common/autotest_common.sh@1142 -- # return 0 00:25:19.234 16:41:58 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:19.234 16:41:58 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:19.234 16:41:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:19.234 16:41:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.234 16:41:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.234 ************************************ 00:25:19.234 START TEST spdkcli_nvmf_tcp 00:25:19.234 ************************************ 00:25:19.234 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:19.493 * Looking for test storage... 00:25:19.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.493 16:41:58 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.493 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1623215 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1623215 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1623215 ']' 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.494 16:41:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.494 [2024-07-15 16:41:58.930335] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:25:19.494 [2024-07-15 16:41:58.930413] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623215 ] 00:25:19.494 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.494 [2024-07-15 16:41:58.989014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:19.752 [2024-07-15 16:41:59.101903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.752 [2024-07-15 16:41:59.101914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.752 16:41:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:19.752 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:19.752 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:19.752 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:19.752 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:19.752 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:19.752 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:19.752 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:19.752 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:19.752 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:19.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:19.752 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:19.752 ' 00:25:22.344 [2024-07-15 16:42:01.774565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.719 [2024-07-15 16:42:03.006993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:26.253 [2024-07-15 16:42:05.294112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:28.154 [2024-07-15 16:42:07.264463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:29.527 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:25:29.527 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:29.527 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:29.527 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:29.527 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:29.527 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:29.527 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:29.527 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:29.527 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:29.527 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:29.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:29.527 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:29.527 16:42:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.785 16:42:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:29.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:29.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:29.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:29.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:29.785 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:29.785 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:29.785 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:29.785 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:29.785 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:29.785 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:29.785 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:29.785 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:29.785 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:29.785 ' 00:25:35.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:35.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:35.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:35.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:35.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:35.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:35.044 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:35.044 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:35.044 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:35.044 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:35.044 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:35.044 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:35.044 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:35.044 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1623215 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1623215 ']' 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1623215 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1623215 00:25:35.044 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:35.045 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:35.045 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1623215' 00:25:35.045 killing process with pid 1623215 00:25:35.045 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1623215 00:25:35.045 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1623215 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1623215 ']' 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 1623215 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1623215 ']' 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1623215 00:25:35.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1623215) - No such process 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1623215 is not found' 00:25:35.302 Process with pid 1623215 is not found 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:35.302 16:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:35.303 16:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:35.303 00:25:35.303 real 0m16.044s 00:25:35.303 user 0m33.873s 00:25:35.303 sys 0m0.795s 00:25:35.303 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:35.303 16:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:35.303 ************************************ 00:25:35.303 END TEST spdkcli_nvmf_tcp 00:25:35.303 ************************************ 00:25:35.303 16:42:14 -- common/autotest_common.sh@1142 -- # return 0 00:25:35.303 16:42:14 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:35.303 16:42:14 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:35.303 16:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.303 16:42:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.562 ************************************ 00:25:35.562 START TEST nvmf_identify_passthru 00:25:35.562 
************************************ 00:25:35.562 16:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:35.562 * Looking for test storage... 00:25:35.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:35.562 16:42:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.562 16:42:14 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.562 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.562 16:42:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.562 16:42:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.562 16:42:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.562 16:42:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:35.563 16:42:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.563 16:42:14 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.563 16:42:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.563 16:42:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:25:35.563 16:42:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.563 16:42:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.563 16:42:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:35.563 16:42:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:35.563 16:42:14 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:35.563 16:42:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:37.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:37.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:37.505 16:42:16 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:37.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.505 16:42:16 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:37.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.505 16:42:16 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.505 16:42:16 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:25:37.505 00:25:37.505 --- 10.0.0.2 ping statistics --- 00:25:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.505 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:37.505 00:25:37.505 --- 10.0.0.1 ping statistics --- 00:25:37.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.505 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.505 16:42:17 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.505 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:37.505 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:37.505 16:42:17 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:37.505 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:37.763 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:37.763 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:25:37.763 16:42:17 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:25:37.763 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:37.763 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:37.763 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:37.763 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:37.763 16:42:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:37.763 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.944 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:41.944 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:41.944 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:25:41.944 16:42:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:41.944 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.192 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:46.192 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.192 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.192 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1628334 00:25:46.192 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:46.192 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.192 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1628334 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1628334 ']' 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:46.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.192 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.192 [2024-07-15 16:42:25.636216] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:25:46.192 [2024-07-15 16:42:25.636319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.192 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.192 [2024-07-15 16:42:25.702434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.450 [2024-07-15 16:42:25.814340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.450 [2024-07-15 16:42:25.814397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.450 [2024-07-15 16:42:25.814425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.450 [2024-07-15 16:42:25.814436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.450 [2024-07-15 16:42:25.814446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:46.450 [2024-07-15 16:42:25.815910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.450 [2024-07-15 16:42:25.815935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.450 [2024-07-15 16:42:25.816005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.450 [2024-07-15 16:42:25.816008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.450 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.450 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:46.450 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:46.450 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.450 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.450 INFO: Log level set to 20 00:25:46.450 INFO: Requests: 00:25:46.450 { 00:25:46.450 "jsonrpc": "2.0", 00:25:46.450 "method": "nvmf_set_config", 00:25:46.450 "id": 1, 00:25:46.450 "params": { 00:25:46.450 "admin_cmd_passthru": { 00:25:46.450 "identify_ctrlr": true 00:25:46.450 } 00:25:46.450 } 00:25:46.450 } 00:25:46.450 00:25:46.450 INFO: response: 00:25:46.450 { 00:25:46.450 "jsonrpc": "2.0", 00:25:46.450 "id": 1, 00:25:46.450 "result": true 00:25:46.450 } 00:25:46.450 00:25:46.450 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.450 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.451 INFO: Setting log level to 20 00:25:46.451 INFO: Setting log level to 20 00:25:46.451 INFO: Log level set to 20 00:25:46.451 INFO: Log level set to 20 00:25:46.451 
INFO: Requests: 00:25:46.451 { 00:25:46.451 "jsonrpc": "2.0", 00:25:46.451 "method": "framework_start_init", 00:25:46.451 "id": 1 00:25:46.451 } 00:25:46.451 00:25:46.451 INFO: Requests: 00:25:46.451 { 00:25:46.451 "jsonrpc": "2.0", 00:25:46.451 "method": "framework_start_init", 00:25:46.451 "id": 1 00:25:46.451 } 00:25:46.451 00:25:46.451 [2024-07-15 16:42:25.968207] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:46.451 INFO: response: 00:25:46.451 { 00:25:46.451 "jsonrpc": "2.0", 00:25:46.451 "id": 1, 00:25:46.451 "result": true 00:25:46.451 } 00:25:46.451 00:25:46.451 INFO: response: 00:25:46.451 { 00:25:46.451 "jsonrpc": "2.0", 00:25:46.451 "id": 1, 00:25:46.451 "result": true 00:25:46.451 } 00:25:46.451 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.451 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.451 INFO: Setting log level to 40 00:25:46.451 INFO: Setting log level to 40 00:25:46.451 INFO: Setting log level to 40 00:25:46.451 [2024-07-15 16:42:25.978395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.451 16:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.451 16:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:46.451 16:42:26 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:46.451 16:42:26 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.451 16:42:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.739 Nvme0n1 00:25:49.739 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.739 16:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.740 16:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.740 16:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.740 [2024-07-15 16:42:28.869934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.740 16:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.740 16:42:28 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.740 [ 00:25:49.740 { 00:25:49.740 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:49.740 "subtype": "Discovery", 00:25:49.740 "listen_addresses": [], 00:25:49.740 "allow_any_host": true, 00:25:49.740 "hosts": [] 00:25:49.740 }, 00:25:49.740 { 00:25:49.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.740 "subtype": "NVMe", 00:25:49.740 "listen_addresses": [ 00:25:49.740 { 00:25:49.740 "trtype": "TCP", 00:25:49.740 "adrfam": "IPv4", 00:25:49.740 "traddr": "10.0.0.2", 00:25:49.740 "trsvcid": "4420" 00:25:49.740 } 00:25:49.740 ], 00:25:49.740 "allow_any_host": true, 00:25:49.740 "hosts": [], 00:25:49.740 "serial_number": "SPDK00000000000001", 00:25:49.740 "model_number": "SPDK bdev Controller", 00:25:49.740 "max_namespaces": 1, 00:25:49.740 "min_cntlid": 1, 00:25:49.740 "max_cntlid": 65519, 00:25:49.740 "namespaces": [ 00:25:49.740 { 00:25:49.740 "nsid": 1, 00:25:49.740 "bdev_name": "Nvme0n1", 00:25:49.740 "name": "Nvme0n1", 00:25:49.740 "nguid": "6729D966840D49CC8FC7C6D02616D218", 00:25:49.740 "uuid": "6729d966-840d-49cc-8fc7-c6d02616d218" 00:25:49.740 } 00:25:49.740 ] 00:25:49.740 } 00:25:49.740 ] 00:25:49.740 16:42:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.740 16:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:49.740 16:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:49.740 16:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:49.740 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:49.740 16:42:29 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:49.740 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.740 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.740 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.740 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:49.740 16:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:49.740 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.740 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:49.740 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.740 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:49.740 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.740 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.740 rmmod 
nvme_tcp 00:25:49.999 rmmod nvme_fabrics 00:25:49.999 rmmod nvme_keyring 00:25:49.999 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.999 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:49.999 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:49.999 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1628334 ']' 00:25:49.999 16:42:29 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1628334 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1628334 ']' 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1628334 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1628334 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1628334' 00:25:49.999 killing process with pid 1628334 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1628334 00:25:49.999 16:42:29 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1628334 00:25:51.905 16:42:30 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:51.905 16:42:30 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:51.905 16:42:30 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:51.905 16:42:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:25:51.905 16:42:30 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.905 16:42:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.905 16:42:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:51.905 16:42:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.815 16:42:33 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.815 00:25:53.815 real 0m18.120s 00:25:53.815 user 0m26.997s 00:25:53.815 sys 0m2.321s 00:25:53.815 16:42:33 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.815 16:42:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:53.815 ************************************ 00:25:53.815 END TEST nvmf_identify_passthru 00:25:53.815 ************************************ 00:25:53.815 16:42:33 -- common/autotest_common.sh@1142 -- # return 0 00:25:53.815 16:42:33 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:53.815 16:42:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:53.815 16:42:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.815 16:42:33 -- common/autotest_common.sh@10 -- # set +x 00:25:53.815 ************************************ 00:25:53.815 START TEST nvmf_dif 00:25:53.815 ************************************ 00:25:53.815 16:42:33 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:53.815 * Looking for test storage... 
00:25:53.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:53.815 16:42:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.815 16:42:33 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.815 16:42:33 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.815 16:42:33 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.815 16:42:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.815 16:42:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.815 16:42:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.815 16:42:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:53.815 16:42:33 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.815 16:42:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:53.815 16:42:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:53.815 16:42:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:53.815 16:42:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:53.815 16:42:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.815 16:42:33 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:53.815 16:42:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:53.815 16:42:33 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.816 16:42:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:55.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:25:55.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:55.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:55.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.725 16:42:35 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:25:55.725 00:25:55.725 --- 10.0.0.2 ping statistics --- 00:25:55.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.725 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:25:55.725 00:25:55.725 --- 10.0.0.1 ping statistics --- 00:25:55.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.725 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:55.725 16:42:35 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:56.656 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:56.656 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:56.656 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:56.656 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:56.656 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:56.656 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:56.656 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:56.656 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:56.656 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:56.656 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:56.656 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:56.656 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:56.656 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:56.656 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:56.656 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:56.656 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:56.656 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.914 16:42:36 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:56.914 16:42:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:56.914 16:42:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1631593 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:56.914 16:42:36 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1631593 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1631593 ']' 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.914 16:42:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:56.914 [2024-07-15 16:42:36.468854] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:25:56.914 [2024-07-15 16:42:36.468944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.914 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.172 [2024-07-15 16:42:36.535823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.172 [2024-07-15 16:42:36.643462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.172 [2024-07-15 16:42:36.643517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.172 [2024-07-15 16:42:36.643545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.172 [2024-07-15 16:42:36.643556] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.172 [2024-07-15 16:42:36.643565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:57.172 [2024-07-15 16:42:36.643597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.172 16:42:36 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.172 16:42:36 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:57.172 16:42:36 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:57.172 16:42:36 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:57.172 16:42:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:57.431 16:42:36 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.431 16:42:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:57.431 16:42:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:57.431 16:42:36 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.431 16:42:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:57.431 [2024-07-15 16:42:36.774277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.431 16:42:36 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.431 16:42:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:57.431 16:42:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:57.431 16:42:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.431 16:42:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:57.431 ************************************ 00:25:57.431 START TEST fio_dif_1_default 00:25:57.431 ************************************ 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:57.431 bdev_null0 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:57.431 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:57.432 [2024-07-15 16:42:36.830549] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:57.432 { 00:25:57.432 "params": { 00:25:57.432 "name": "Nvme$subsystem", 00:25:57.432 "trtype": "$TEST_TRANSPORT", 00:25:57.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.432 "adrfam": "ipv4", 00:25:57.432 "trsvcid": "$NVMF_PORT", 00:25:57.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.432 "hdgst": ${hdgst:-false}, 00:25:57.432 "ddgst": ${ddgst:-false} 00:25:57.432 }, 00:25:57.432 "method": "bdev_nvme_attach_controller" 00:25:57.432 } 00:25:57.432 EOF 00:25:57.432 )") 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:57.432 "params": { 00:25:57.432 "name": "Nvme0", 00:25:57.432 "trtype": "tcp", 00:25:57.432 "traddr": "10.0.0.2", 00:25:57.432 "adrfam": "ipv4", 00:25:57.432 "trsvcid": "4420", 00:25:57.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:57.432 "hdgst": false, 00:25:57.432 "ddgst": false 00:25:57.432 }, 00:25:57.432 "method": "bdev_nvme_attach_controller" 00:25:57.432 }' 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:57.432 16:42:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.691 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:57.692 fio-3.35 
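The `filename0:` banner above reflects the job file that `gen_fio_conf` in target/dif.sh writes to /dev/fd/61. A hedged reconstruction consistent with the banner's parameters (rw=randread, bs=4096, iodepth=4) — the section layout and the `filename=` value are assumptions, and `--ioengine=spdk_bdev` is supplied on the fio command line rather than in the file:

```ini
; sketch of the generated jobfile, not the literal output of gen_fio_conf
[global]
thread=1

[filename0]
; bdev name assumed: namespace 1 of the attached Nvme0 controller
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
```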
00:25:57.692 Starting 1 thread 00:25:57.692 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.907 00:26:09.907 filename0: (groupid=0, jobs=1): err= 0: pid=1631806: Mon Jul 15 16:42:47 2024 00:26:09.907 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10005msec) 00:26:09.907 slat (nsec): min=4317, max=43561, avg=9473.95, stdev=3038.94 00:26:09.907 clat (usec): min=741, max=45957, avg=21076.42, stdev=20138.09 00:26:09.907 lat (usec): min=749, max=45972, avg=21085.90, stdev=20137.70 00:26:09.907 clat percentiles (usec): 00:26:09.907 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 848], 00:26:09.907 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:26:09.907 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:09.907 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:26:09.907 | 99.99th=[45876] 00:26:09.907 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:26:09.907 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:26:09.907 lat (usec) : 750=0.11%, 1000=49.68% 00:26:09.907 lat (msec) : 50=50.21% 00:26:09.907 cpu : usr=90.14%, sys=9.59%, ctx=14, majf=0, minf=232 00:26:09.907 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.907 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.907 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:09.907 00:26:09.907 Run status group 0 (all jobs): 00:26:09.907 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10005-10005msec 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:09.907 16:42:47 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.907 00:26:09.907 real 0m11.066s 00:26:09.907 user 0m10.249s 00:26:09.907 sys 0m1.222s 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.907 ************************************ 00:26:09.907 END TEST fio_dif_1_default 00:26:09.907 ************************************ 00:26:09.907 16:42:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:09.907 16:42:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:09.907 16:42:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:09.907 16:42:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.907 16:42:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:09.907 
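The `config+=("$(cat <<-EOF ...)")` / `IFS=,` / `printf` sequence traced above is how `gen_nvmf_target_json` in nvmf/common.sh assembles the per-subsystem JSON fed to fio via /dev/fd/62. A minimal standalone sketch of that pattern — `gen_json_fragment` is a name invented here, and the environment defaults mirror the values printed in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from the trace above.
# One heredoc fragment is produced per subsystem; fragments are then
# comma-joined exactly as the IFS=, printf step does before piping to fio.

gen_json_fragment() {
  local subsystem=$1
  # hdgst/ddgst default to false when the caller has not exported them,
  # matching the ${hdgst:-false} expansion seen in the trace.
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Collect one fragment per subsystem, then join with commas.
config=()
for subsystem in 0 1; do
  config+=("$(gen_json_fragment "$subsystem")")
done
IFS=,
printf '%s\n' "${config[*]}"
```

With two subsystems this prints the same two comma-joined `bdev_nvme_attach_controller` objects that appear later in the trace for Nvme0 and Nvme1.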
************************************ 00:26:09.907 START TEST fio_dif_1_multi_subsystems 00:26:09.907 ************************************ 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:09.907 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 bdev_null0 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 [2024-07-15 16:42:47.945856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 bdev_null1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:09.908 16:42:47 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.908 16:42:47 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:09.908 { 00:26:09.908 "params": { 00:26:09.908 "name": "Nvme$subsystem", 00:26:09.908 "trtype": "$TEST_TRANSPORT", 00:26:09.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.908 "adrfam": "ipv4", 00:26:09.908 "trsvcid": "$NVMF_PORT", 00:26:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.908 "hdgst": ${hdgst:-false}, 00:26:09.908 "ddgst": ${ddgst:-false} 00:26:09.908 }, 00:26:09.908 "method": "bdev_nvme_attach_controller" 00:26:09.908 } 00:26:09.908 EOF 00:26:09.908 )") 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local 
asan_lib= 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:09.908 { 00:26:09.908 "params": { 00:26:09.908 "name": "Nvme$subsystem", 00:26:09.908 "trtype": "$TEST_TRANSPORT", 00:26:09.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.908 "adrfam": "ipv4", 00:26:09.908 "trsvcid": "$NVMF_PORT", 00:26:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.908 "hdgst": ${hdgst:-false}, 00:26:09.908 "ddgst": ${ddgst:-false} 00:26:09.908 }, 00:26:09.908 "method": "bdev_nvme_attach_controller" 00:26:09.908 } 00:26:09.908 EOF 00:26:09.908 )") 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:09.908 
16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:09.908 16:42:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:09.908 "params": { 00:26:09.908 "name": "Nvme0", 00:26:09.908 "trtype": "tcp", 00:26:09.908 "traddr": "10.0.0.2", 00:26:09.908 "adrfam": "ipv4", 00:26:09.908 "trsvcid": "4420", 00:26:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:09.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:09.908 "hdgst": false, 00:26:09.908 "ddgst": false 00:26:09.908 }, 00:26:09.908 "method": "bdev_nvme_attach_controller" 00:26:09.908 },{ 00:26:09.908 "params": { 00:26:09.908 "name": "Nvme1", 00:26:09.908 "trtype": "tcp", 00:26:09.908 "traddr": "10.0.0.2", 00:26:09.908 "adrfam": "ipv4", 00:26:09.908 "trsvcid": "4420", 00:26:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:09.908 "hdgst": false, 00:26:09.908 "ddgst": false 00:26:09.908 }, 00:26:09.908 "method": "bdev_nvme_attach_controller" 00:26:09.908 }' 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:09.908 16:42:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.908 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:09.908 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:09.908 fio-3.35 00:26:09.908 Starting 2 threads 00:26:09.908 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.888 00:26:19.888 filename0: (groupid=0, jobs=1): err= 0: pid=1633108: Mon Jul 15 16:42:58 2024 00:26:19.888 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10006msec) 00:26:19.888 slat (nsec): min=4750, max=74417, avg=9964.17, stdev=3200.47 00:26:19.888 clat (usec): min=796, max=45983, avg=21076.48, stdev=20126.76 00:26:19.888 lat (usec): min=806, max=45996, avg=21086.45, stdev=20126.57 00:26:19.888 clat percentiles (usec): 00:26:19.888 | 1.00th=[ 816], 5.00th=[ 832], 10.00th=[ 840], 20.00th=[ 857], 00:26:19.888 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:26:19.888 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:19.888 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:26:19.888 | 99.99th=[45876] 00:26:19.888 bw ( KiB/s): min= 670, max= 768, per=66.41%, avg=756.70, stdev=28.32, samples=20 00:26:19.888 iops : min= 167, max= 192, avg=189.15, stdev= 7.16, samples=20 00:26:19.888 lat (usec) : 1000=49.37% 00:26:19.888 lat (msec) : 2=0.42%, 50=50.21% 00:26:19.888 cpu : usr=93.65%, sys=5.28%, ctx=33, majf=0, minf=102 00:26:19.888 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.888 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.888 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:19.888 filename1: (groupid=0, jobs=1): err= 0: pid=1633109: Mon Jul 15 16:42:58 2024 00:26:19.888 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10036msec) 00:26:19.888 slat (nsec): min=4432, max=66402, avg=10582.18, stdev=4251.38 00:26:19.888 clat (usec): min=40915, max=46016, avg=41780.36, stdev=507.95 00:26:19.888 lat (usec): min=40923, max=46031, avg=41790.95, stdev=508.13 00:26:19.888 clat percentiles (usec): 00:26:19.888 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:19.888 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:26:19.888 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:19.888 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:26:19.888 | 99.99th=[45876] 00:26:19.888 bw ( KiB/s): min= 351, max= 384, per=33.56%, avg=382.35, stdev= 7.38, samples=20 00:26:19.888 iops : min= 87, max= 96, avg=95.55, stdev= 2.01, samples=20 00:26:19.888 lat (msec) : 50=100.00% 00:26:19.888 cpu : usr=94.53%, sys=5.17%, ctx=10, majf=0, minf=158 00:26:19.888 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.888 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.888 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:19.888 00:26:19.888 Run status group 0 (all jobs): 00:26:19.888 READ: bw=1138KiB/s (1166kB/s), 383KiB/s-758KiB/s (392kB/s-776kB/s), io=11.2MiB (11.7MB), run=10006-10036msec 
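The `ldd ... | grep libasan | awk '{print $3}'` loop traced around each fio launch is how `fio_plugin` in autotest_common.sh decides what to put in LD_PRELOAD: if the SPDK fio plugin was built with a sanitizer, the sanitizer runtime must be preloaded ahead of it. A hedged sketch of that probe — `detect_asan_preload` is a name invented here; the real helper also appends the plugin path itself, which is why the trace shows `LD_PRELOAD=' .../build/fio/spdk_bdev'` even when no ASan library is found:

```shell
#!/usr/bin/env bash
# Sketch of the sanitizer-preload probe from autotest_common.sh's fio_plugin.
# For each candidate runtime, ldd the plugin; if it is linked, the third ldd
# column ("lib => /path (addr)") is the resolved library path to preload.

detect_asan_preload() {
  local plugin=$1
  local sanitizers=('libasan' 'libclang_rt.asan')
  local sanitizer asan_lib preload=""
  for sanitizer in "${sanitizers[@]}"; do
    asan_lib=$(ldd "$plugin" 2>/dev/null | grep -- "$sanitizer" | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
      preload+=" $asan_lib"
    fi
  done
  printf '%s\n' "$preload"
}
```

In this run both grep probes come back empty (`asan_lib=`), so the test proceeds with a plain, non-sanitized preload of the plugin.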
00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 00:26:19.888 real 0m11.400s 00:26:19.888 user 0m20.266s 00:26:19.888 sys 0m1.338s 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 ************************************ 00:26:19.888 END TEST fio_dif_1_multi_subsystems 00:26:19.888 ************************************ 00:26:19.888 16:42:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:19.888 16:42:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:19.888 16:42:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:19.888 16:42:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 ************************************ 00:26:19.888 START TEST fio_dif_rand_params 00:26:19.888 ************************************ 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:19.888 
16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 bdev_null0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:19.888 16:42:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 [2024-07-15 16:42:59.397651] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:19.888 { 00:26:19.888 "params": { 00:26:19.888 "name": "Nvme$subsystem", 00:26:19.888 "trtype": "$TEST_TRANSPORT", 00:26:19.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.888 "adrfam": "ipv4", 00:26:19.888 "trsvcid": "$NVMF_PORT", 00:26:19.888 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.888 "hdgst": ${hdgst:-false}, 00:26:19.888 "ddgst": ${ddgst:-false} 00:26:19.888 }, 00:26:19.888 "method": "bdev_nvme_attach_controller" 00:26:19.888 } 00:26:19.888 EOF 00:26:19.888 )") 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:19.888 16:42:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:19.888 "params": { 00:26:19.888 "name": "Nvme0", 00:26:19.888 "trtype": "tcp", 00:26:19.888 "traddr": "10.0.0.2", 00:26:19.888 "adrfam": "ipv4", 00:26:19.888 "trsvcid": "4420", 00:26:19.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:19.888 "hdgst": false, 00:26:19.888 "ddgst": false 00:26:19.888 }, 00:26:19.888 "method": "bdev_nvme_attach_controller" 00:26:19.888 }' 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:19.888 16:42:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:20.147 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:20.147 ... 00:26:20.147 fio-3.35 00:26:20.147 Starting 3 threads 00:26:20.147 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.711 00:26:26.711 filename0: (groupid=0, jobs=1): err= 0: pid=1634507: Mon Jul 15 16:43:05 2024 00:26:26.711 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(149MiB/5045msec) 00:26:26.711 slat (nsec): min=4691, max=32744, avg=13393.30, stdev=2073.51 00:26:26.711 clat (usec): min=4973, max=88791, avg=12686.29, stdev=11329.60 00:26:26.711 lat (usec): min=4986, max=88806, avg=12699.68, stdev=11329.58 00:26:26.711 clat percentiles (usec): 00:26:26.711 | 1.00th=[ 5407], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 7898], 00:26:26.711 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9896], 00:26:26.711 | 70.00th=[11207], 80.00th=[12518], 90.00th=[14746], 95.00th=[49546], 00:26:26.711 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54789], 99.95th=[88605], 00:26:26.711 | 99.99th=[88605] 00:26:26.711 bw ( KiB/s): min=22272, max=46080, per=37.98%, avg=30336.00, stdev=6327.63, samples=10 00:26:26.711 iops : min= 174, max= 360, avg=237.00, stdev=49.43, samples=10 00:26:26.711 lat (msec) : 10=60.35%, 20=31.99%, 50=3.45%, 100=4.21% 00:26:26.711 cpu : usr=92.55%, sys=7.02%, ctx=10, majf=0, minf=137 00:26:26.711 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.711 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.711 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:26:26.711 filename0: (groupid=0, jobs=1): err= 0: pid=1634508: Mon Jul 15 16:43:05 2024 00:26:26.711 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(125MiB/5006msec) 00:26:26.711 slat (nsec): min=4657, max=29683, avg=13585.00, stdev=2106.32 00:26:26.711 clat (usec): min=5201, max=94644, avg=15056.57, stdev=14426.21 00:26:26.711 lat (usec): min=5213, max=94658, avg=15070.15, stdev=14426.27 00:26:26.711 clat percentiles (usec): 00:26:26.711 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 8291], 00:26:26.711 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[11076], 00:26:26.711 | 70.00th=[11994], 80.00th=[13173], 90.00th=[49021], 95.00th=[51643], 00:26:26.711 | 99.00th=[54264], 99.50th=[88605], 99.90th=[94897], 99.95th=[94897], 00:26:26.711 | 99.99th=[94897] 00:26:26.711 bw ( KiB/s): min=15616, max=36096, per=31.83%, avg=25420.80, stdev=6320.49, samples=10 00:26:26.711 iops : min= 122, max= 282, avg=198.60, stdev=49.38, samples=10 00:26:26.711 lat (msec) : 10=51.51%, 20=36.35%, 50=4.02%, 100=8.13% 00:26:26.711 cpu : usr=92.39%, sys=6.73%, ctx=30, majf=0, minf=71 00:26:26.711 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.711 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:26.711 filename0: (groupid=0, jobs=1): err= 0: pid=1634509: Mon Jul 15 16:43:05 2024 00:26:26.711 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(121MiB/5021msec) 00:26:26.711 slat (nsec): min=4812, max=25955, avg=13106.39, stdev=2064.99 00:26:26.711 clat (usec): min=5131, max=92359, avg=15605.82, stdev=15096.64 00:26:26.711 lat (usec): min=5144, max=92372, avg=15618.93, stdev=15096.59 00:26:26.711 clat percentiles (usec): 00:26:26.711 | 1.00th=[ 5473], 5.00th=[ 5997], 
10.00th=[ 6325], 20.00th=[ 7898], 00:26:26.711 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[11076], 00:26:26.711 | 70.00th=[12256], 80.00th=[13304], 90.00th=[50594], 95.00th=[52691], 00:26:26.711 | 99.00th=[54264], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:26:26.711 | 99.99th=[92799] 00:26:26.711 bw ( KiB/s): min=18176, max=34304, per=30.81%, avg=24606.10, stdev=5593.13, samples=10 00:26:26.711 iops : min= 142, max= 268, avg=192.20, stdev=43.71, samples=10 00:26:26.711 lat (msec) : 10=52.49%, 20=33.09%, 50=3.84%, 100=10.58% 00:26:26.711 cpu : usr=92.47%, sys=6.61%, ctx=300, majf=0, minf=64 00:26:26.711 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.711 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:26.711 00:26:26.711 Run status group 0 (all jobs): 00:26:26.711 READ: bw=78.0MiB/s (81.8MB/s), 24.0MiB/s-29.4MiB/s (25.2MB/s-30.9MB/s), io=394MiB (413MB), run=5006-5045msec 00:26:26.711 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 bdev_null0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 [2024-07-15 16:43:05.619372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 bdev_null1 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 bdev_null2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.712 { 00:26:26.712 "params": { 00:26:26.712 "name": "Nvme$subsystem", 00:26:26.712 "trtype": "$TEST_TRANSPORT", 00:26:26.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.712 "adrfam": "ipv4", 00:26:26.712 "trsvcid": "$NVMF_PORT", 00:26:26.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.712 "hdgst": ${hdgst:-false}, 00:26:26.712 "ddgst": ${ddgst:-false} 00:26:26.712 }, 00:26:26.712 "method": "bdev_nvme_attach_controller" 00:26:26.712 } 00:26:26.712 EOF 00:26:26.712 )") 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:26.712 16:43:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.712 { 00:26:26.712 "params": { 00:26:26.712 "name": "Nvme$subsystem", 00:26:26.712 "trtype": "$TEST_TRANSPORT", 00:26:26.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.712 "adrfam": "ipv4", 00:26:26.712 "trsvcid": "$NVMF_PORT", 00:26:26.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.712 "hdgst": ${hdgst:-false}, 00:26:26.712 "ddgst": ${ddgst:-false} 00:26:26.712 }, 00:26:26.712 "method": "bdev_nvme_attach_controller" 
00:26:26.712 } 00:26:26.712 EOF 00:26:26.712 )") 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:26.712 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:26.713 { 00:26:26.713 "params": { 00:26:26.713 "name": "Nvme$subsystem", 00:26:26.713 "trtype": "$TEST_TRANSPORT", 00:26:26.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.713 "adrfam": "ipv4", 00:26:26.713 "trsvcid": "$NVMF_PORT", 00:26:26.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.713 "hdgst": ${hdgst:-false}, 00:26:26.713 "ddgst": ${ddgst:-false} 00:26:26.713 }, 00:26:26.713 "method": "bdev_nvme_attach_controller" 00:26:26.713 } 00:26:26.713 EOF 00:26:26.713 )") 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:26.713 "params": { 00:26:26.713 "name": "Nvme0", 00:26:26.713 "trtype": "tcp", 00:26:26.713 "traddr": "10.0.0.2", 00:26:26.713 "adrfam": "ipv4", 00:26:26.713 "trsvcid": "4420", 00:26:26.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.713 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:26.713 "hdgst": false, 00:26:26.713 "ddgst": false 00:26:26.713 }, 00:26:26.713 "method": "bdev_nvme_attach_controller" 00:26:26.713 },{ 00:26:26.713 "params": { 00:26:26.713 "name": "Nvme1", 00:26:26.713 "trtype": "tcp", 00:26:26.713 "traddr": "10.0.0.2", 00:26:26.713 "adrfam": "ipv4", 00:26:26.713 "trsvcid": "4420", 00:26:26.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:26.713 "hdgst": false, 00:26:26.713 "ddgst": false 00:26:26.713 }, 00:26:26.713 "method": "bdev_nvme_attach_controller" 00:26:26.713 },{ 00:26:26.713 "params": { 00:26:26.713 "name": "Nvme2", 00:26:26.713 "trtype": "tcp", 00:26:26.713 "traddr": "10.0.0.2", 00:26:26.713 "adrfam": "ipv4", 00:26:26.713 "trsvcid": "4420", 00:26:26.713 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:26.713 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:26.713 "hdgst": false, 00:26:26.713 "ddgst": false 00:26:26.713 }, 00:26:26.713 "method": "bdev_nvme_attach_controller" 00:26:26.713 }' 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:26.713 16:43:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:26.713 16:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.713 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:26.713 ... 00:26:26.713 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:26.713 ... 00:26:26.713 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:26.713 ... 
00:26:26.713 fio-3.35 00:26:26.713 Starting 24 threads 00:26:26.713 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.903 00:26:38.903 filename0: (groupid=0, jobs=1): err= 0: pid=1635375: Mon Jul 15 16:43:17 2024 00:26:38.903 read: IOPS=66, BW=268KiB/s (274kB/s)(2688KiB/10040msec) 00:26:38.903 slat (usec): min=15, max=108, avg=60.79, stdev=15.00 00:26:38.903 clat (msec): min=117, max=307, avg=238.57, stdev=32.36 00:26:38.903 lat (msec): min=118, max=307, avg=238.64, stdev=32.37 00:26:38.903 clat percentiles (msec): 00:26:38.903 | 1.00th=[ 126], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 215], 00:26:38.903 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.903 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 271], 00:26:38.903 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 309], 00:26:38.903 | 99.99th=[ 309] 00:26:38.903 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=262.40, stdev=50.97, samples=20 00:26:38.903 iops : min= 32, max= 96, avg=65.60, stdev=12.74, samples=20 00:26:38.903 lat (msec) : 250=52.68%, 500=47.32% 00:26:38.903 cpu : usr=98.25%, sys=1.30%, ctx=14, majf=0, minf=9 00:26:38.903 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:26:38.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.903 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.903 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.903 filename0: (groupid=0, jobs=1): err= 0: pid=1635376: Mon Jul 15 16:43:17 2024 00:26:38.903 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10026msec) 00:26:38.903 slat (nsec): min=8489, max=95593, avg=49643.98, stdev=23016.85 00:26:38.903 clat (msec): min=101, max=420, avg=244.06, stdev=44.63 00:26:38.903 lat (msec): min=101, max=421, avg=244.11, stdev=44.62 00:26:38.903 clat percentiles (msec): 00:26:38.903 | 1.00th=[ 110], 
5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 224], 00:26:38.903 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:26:38.903 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 317], 00:26:38.903 | 99.00th=[ 384], 99.50th=[ 397], 99.90th=[ 422], 99.95th=[ 422], 00:26:38.903 | 99.99th=[ 422] 00:26:38.903 bw ( KiB/s): min= 128, max= 384, per=3.70%, avg=256.00, stdev=55.43, samples=20 00:26:38.903 iops : min= 32, max= 96, avg=64.00, stdev=13.86, samples=20 00:26:38.903 lat (msec) : 250=55.49%, 500=44.51% 00:26:38.903 cpu : usr=97.29%, sys=1.92%, ctx=55, majf=0, minf=9 00:26:38.903 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:38.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.903 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.903 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.903 filename0: (groupid=0, jobs=1): err= 0: pid=1635377: Mon Jul 15 16:43:17 2024 00:26:38.903 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10030msec) 00:26:38.903 slat (usec): min=9, max=147, avg=39.62, stdev=16.48 00:26:38.903 clat (msec): min=140, max=330, avg=244.21, stdev=28.03 00:26:38.903 lat (msec): min=140, max=330, avg=244.25, stdev=28.03 00:26:38.903 clat percentiles (msec): 00:26:38.903 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 205], 20.00th=[ 234], 00:26:38.903 | 30.00th=[ 236], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:26:38.903 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 275], 00:26:38.903 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 330], 99.95th=[ 330], 00:26:38.903 | 99.99th=[ 330] 00:26:38.903 bw ( KiB/s): min= 144, max= 384, per=3.70%, avg=256.00, stdev=39.19, samples=20 00:26:38.903 iops : min= 36, max= 96, avg=64.00, stdev= 9.80, samples=20 00:26:38.903 lat (msec) : 250=50.30%, 500=49.70% 00:26:38.903 cpu : usr=97.31%, sys=1.76%, 
ctx=40, majf=0, minf=9 00:26:38.904 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename0: (groupid=0, jobs=1): err= 0: pid=1635378: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10027msec) 00:26:38.904 slat (usec): min=17, max=150, avg=61.45, stdev=11.24 00:26:38.904 clat (msec): min=69, max=400, avg=244.06, stdev=43.51 00:26:38.904 lat (msec): min=69, max=401, avg=244.12, stdev=43.50 00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 70], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 230], 00:26:38.904 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.904 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 284], 00:26:38.904 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:26:38.904 | 99.99th=[ 401] 00:26:38.904 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=256.00, stdev=41.53, samples=20 00:26:38.904 iops : min= 32, max= 96, avg=64.00, stdev=10.38, samples=20 00:26:38.904 lat (msec) : 100=2.44%, 250=49.70%, 500=47.87% 00:26:38.904 cpu : usr=95.89%, sys=2.30%, ctx=121, majf=0, minf=9 00:26:38.904 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename0: (groupid=0, jobs=1): err= 0: pid=1635379: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=66, BW=268KiB/s 
(274kB/s)(2688KiB/10044msec) 00:26:38.904 slat (usec): min=9, max=135, avg=30.43, stdev=10.81 00:26:38.904 clat (msec): min=117, max=313, avg=238.80, stdev=34.39 00:26:38.904 lat (msec): min=118, max=313, avg=238.83, stdev=34.39 00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 127], 5.00th=[ 161], 10.00th=[ 190], 20.00th=[ 215], 00:26:38.904 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.904 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:26:38.904 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:26:38.904 | 99.99th=[ 313] 00:26:38.904 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=262.40, stdev=48.53, samples=20 00:26:38.904 iops : min= 32, max= 96, avg=65.60, stdev=12.13, samples=20 00:26:38.904 lat (msec) : 250=52.08%, 500=47.92% 00:26:38.904 cpu : usr=97.89%, sys=1.72%, ctx=12, majf=0, minf=9 00:26:38.904 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename0: (groupid=0, jobs=1): err= 0: pid=1635380: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=66, BW=268KiB/s (274kB/s)(2688KiB/10044msec) 00:26:38.904 slat (usec): min=10, max=161, avg=41.78, stdev=15.76 00:26:38.904 clat (msec): min=125, max=307, avg=238.71, stdev=32.22 00:26:38.904 lat (msec): min=125, max=307, avg=238.75, stdev=32.23 00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 127], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 215], 00:26:38.904 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.904 | 70.00th=[ 257], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 275], 00:26:38.904 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 
309], 00:26:38.904 | 99.99th=[ 309] 00:26:38.904 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=262.40, stdev=50.70, samples=20 00:26:38.904 iops : min= 32, max= 96, avg=65.60, stdev=12.68, samples=20 00:26:38.904 lat (msec) : 250=52.38%, 500=47.62% 00:26:38.904 cpu : usr=96.39%, sys=2.16%, ctx=63, majf=0, minf=9 00:26:38.904 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename0: (groupid=0, jobs=1): err= 0: pid=1635381: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=69, BW=280KiB/s (287kB/s)(2816KiB/10059msec) 00:26:38.904 slat (nsec): min=3982, max=92618, avg=60893.56, stdev=15290.34 00:26:38.904 clat (msec): min=22, max=344, avg=228.11, stdev=57.16 00:26:38.904 lat (msec): min=22, max=344, avg=228.17, stdev=57.17 00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 23], 5.00th=[ 101], 10.00th=[ 178], 20.00th=[ 199], 00:26:38.904 | 30.00th=[ 230], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 251], 00:26:38.904 | 70.00th=[ 257], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 284], 00:26:38.904 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:26:38.904 | 99.99th=[ 347] 00:26:38.904 bw ( KiB/s): min= 256, max= 512, per=3.99%, avg=275.20, stdev=62.64, samples=20 00:26:38.904 iops : min= 64, max= 128, avg=68.80, stdev=15.66, samples=20 00:26:38.904 lat (msec) : 50=1.99%, 100=2.84%, 250=51.99%, 500=43.18% 00:26:38.904 cpu : usr=98.20%, sys=1.37%, ctx=12, majf=0, minf=9 00:26:38.904 IO depths : 1=3.1%, 2=9.2%, 4=24.6%, 8=53.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=94.2%, 8=0.1%, 
16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename0: (groupid=0, jobs=1): err= 0: pid=1635382: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=94, BW=378KiB/s (387kB/s)(3808KiB/10063msec) 00:26:38.904 slat (nsec): min=4612, max=82200, avg=13844.45, stdev=9673.64 00:26:38.904 clat (msec): min=25, max=324, avg=168.79, stdev=43.82 00:26:38.904 lat (msec): min=25, max=324, avg=168.80, stdev=43.82 00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 26], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 140], 00:26:38.904 | 30.00th=[ 148], 40.00th=[ 155], 50.00th=[ 180], 60.00th=[ 186], 00:26:38.904 | 70.00th=[ 190], 80.00th=[ 197], 90.00th=[ 207], 95.00th=[ 245], 00:26:38.904 | 99.00th=[ 275], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:26:38.904 | 99.99th=[ 326] 00:26:38.904 bw ( KiB/s): min= 256, max= 512, per=5.43%, avg=374.40, stdev=52.53, samples=20 00:26:38.904 iops : min= 64, max= 128, avg=93.60, stdev=13.13, samples=20 00:26:38.904 lat (msec) : 50=1.68%, 100=0.84%, 250=92.65%, 500=4.83% 00:26:38.904 cpu : usr=98.08%, sys=1.53%, ctx=23, majf=0, minf=9 00:26:38.904 IO depths : 1=1.2%, 2=4.3%, 4=15.2%, 8=67.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename1: (groupid=0, jobs=1): err= 0: pid=1635383: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=83, BW=334KiB/s (342kB/s)(3360KiB/10062msec) 00:26:38.904 slat (usec): min=7, max=238, avg=27.54, stdev=24.95 00:26:38.904 clat (msec): min=25, max=307, avg=191.40, stdev=48.82 00:26:38.904 lat (msec): min=25, max=307, avg=191.42, stdev=48.82 
00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 26], 5.00th=[ 113], 10.00th=[ 126], 20.00th=[ 157], 00:26:38.904 | 30.00th=[ 165], 40.00th=[ 190], 50.00th=[ 197], 60.00th=[ 205], 00:26:38.904 | 70.00th=[ 220], 80.00th=[ 234], 90.00th=[ 243], 95.00th=[ 253], 00:26:38.904 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:26:38.904 | 99.99th=[ 309] 00:26:38.904 bw ( KiB/s): min= 256, max= 512, per=4.77%, avg=329.60, stdev=73.30, samples=20 00:26:38.904 iops : min= 64, max= 128, avg=82.40, stdev=18.33, samples=20 00:26:38.904 lat (msec) : 50=1.90%, 250=90.00%, 500=8.10% 00:26:38.904 cpu : usr=97.52%, sys=1.66%, ctx=41, majf=0, minf=9 00:26:38.904 IO depths : 1=2.0%, 2=5.7%, 4=17.1%, 8=64.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename1: (groupid=0, jobs=1): err= 0: pid=1635384: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10024msec) 00:26:38.904 slat (usec): min=5, max=109, avg=38.01, stdev=14.95 00:26:38.904 clat (msec): min=185, max=324, avg=244.16, stdev=28.37 00:26:38.904 lat (msec): min=185, max=324, avg=244.20, stdev=28.37 00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 230], 00:26:38.904 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 255], 00:26:38.904 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:26:38.904 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 326], 99.95th=[ 326], 00:26:38.904 | 99.99th=[ 326] 00:26:38.904 bw ( KiB/s): min= 128, max= 384, per=3.70%, avg=256.00, stdev=58.96, samples=20 00:26:38.904 iops : min= 32, max= 96, avg=64.00, stdev=14.74, samples=20 00:26:38.904 lat 
(msec) : 250=49.70%, 500=50.30% 00:26:38.904 cpu : usr=97.80%, sys=1.69%, ctx=32, majf=0, minf=9 00:26:38.904 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:38.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.904 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.904 filename1: (groupid=0, jobs=1): err= 0: pid=1635385: Mon Jul 15 16:43:17 2024 00:26:38.904 read: IOPS=66, BW=268KiB/s (274kB/s)(2688KiB/10040msec) 00:26:38.904 slat (usec): min=12, max=120, avg=31.14, stdev=20.09 00:26:38.904 clat (msec): min=125, max=306, avg=238.73, stdev=30.97 00:26:38.904 lat (msec): min=125, max=306, avg=238.76, stdev=30.97 00:26:38.904 clat percentiles (msec): 00:26:38.904 | 1.00th=[ 126], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 234], 00:26:38.904 | 30.00th=[ 236], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.904 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 268], 95.00th=[ 275], 00:26:38.904 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:26:38.904 | 99.99th=[ 309] 00:26:38.904 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=262.40, stdev=50.44, samples=20 00:26:38.904 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:26:38.904 lat (msec) : 250=52.68%, 500=47.32% 00:26:38.905 cpu : usr=96.52%, sys=2.32%, ctx=65, majf=0, minf=9 00:26:38.905 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename1: (groupid=0, jobs=1): err= 0: pid=1635386: Mon Jul 
15 16:43:17 2024 00:26:38.905 read: IOPS=79, BW=317KiB/s (325kB/s)(3184KiB/10044msec) 00:26:38.905 slat (usec): min=7, max=105, avg=31.23, stdev=24.25 00:26:38.905 clat (msec): min=112, max=323, avg=201.39, stdev=43.31 00:26:38.905 lat (msec): min=112, max=323, avg=201.42, stdev=43.32 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 114], 5.00th=[ 124], 10.00th=[ 148], 20.00th=[ 161], 00:26:38.905 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 205], 60.00th=[ 211], 00:26:38.905 | 70.00th=[ 234], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 268], 00:26:38.905 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:26:38.905 | 99.99th=[ 326] 00:26:38.905 bw ( KiB/s): min= 256, max= 384, per=4.53%, avg=312.00, stdev=59.07, samples=20 00:26:38.905 iops : min= 64, max= 96, avg=78.00, stdev=14.77, samples=20 00:26:38.905 lat (msec) : 250=87.44%, 500=12.56% 00:26:38.905 cpu : usr=97.78%, sys=1.78%, ctx=15, majf=0, minf=9 00:26:38.905 IO depths : 1=2.1%, 2=6.4%, 4=19.0%, 8=62.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename1: (groupid=0, jobs=1): err= 0: pid=1635387: Mon Jul 15 16:43:17 2024 00:26:38.905 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10034msec) 00:26:38.905 slat (nsec): min=5769, max=73609, avg=35451.19, stdev=12711.51 00:26:38.905 clat (msec): min=138, max=328, avg=244.42, stdev=29.98 00:26:38.905 lat (msec): min=138, max=329, avg=244.46, stdev=29.98 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 150], 5.00th=[ 190], 10.00th=[ 205], 20.00th=[ 234], 00:26:38.905 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 255], 00:26:38.905 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 
00:26:38.905 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:26:38.905 | 99.99th=[ 330] 00:26:38.905 bw ( KiB/s): min= 127, max= 384, per=3.70%, avg=255.95, stdev=57.46, samples=20 00:26:38.905 iops : min= 31, max= 96, avg=63.95, stdev=14.45, samples=20 00:26:38.905 lat (msec) : 250=49.70%, 500=50.30% 00:26:38.905 cpu : usr=97.93%, sys=1.71%, ctx=17, majf=0, minf=9 00:26:38.905 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename1: (groupid=0, jobs=1): err= 0: pid=1635388: Mon Jul 15 16:43:17 2024 00:26:38.905 read: IOPS=66, BW=268KiB/s (274kB/s)(2688KiB/10044msec) 00:26:38.905 slat (nsec): min=8662, max=92135, avg=40058.14, stdev=14633.49 00:26:38.905 clat (msec): min=126, max=330, avg=238.72, stdev=35.05 00:26:38.905 lat (msec): min=126, max=330, avg=238.76, stdev=35.05 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 127], 5.00th=[ 161], 10.00th=[ 190], 20.00th=[ 211], 00:26:38.905 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.905 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:26:38.905 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 330], 99.95th=[ 330], 00:26:38.905 | 99.99th=[ 330] 00:26:38.905 bw ( KiB/s): min= 144, max= 384, per=3.80%, avg=262.40, stdev=46.55, samples=20 00:26:38.905 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:26:38.905 lat (msec) : 250=53.72%, 500=46.28% 00:26:38.905 cpu : usr=98.04%, sys=1.47%, ctx=21, majf=0, minf=9 00:26:38.905 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:38.905 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename1: (groupid=0, jobs=1): err= 0: pid=1635389: Mon Jul 15 16:43:17 2024 00:26:38.905 read: IOPS=102, BW=408KiB/s (418kB/s)(4112KiB/10067msec) 00:26:38.905 slat (usec): min=5, max=150, avg=14.02, stdev=13.06 00:26:38.905 clat (usec): min=1807, max=282447, avg=156513.39, stdev=54579.00 00:26:38.905 lat (usec): min=1816, max=282468, avg=156527.41, stdev=54577.89 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 109], 20.00th=[ 112], 00:26:38.905 | 30.00th=[ 140], 40.00th=[ 163], 50.00th=[ 171], 60.00th=[ 182], 00:26:38.905 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 209], 95.00th=[ 215], 00:26:38.905 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:26:38.905 | 99.99th=[ 284] 00:26:38.905 bw ( KiB/s): min= 256, max= 1056, per=5.86%, avg=404.80, stdev=160.01, samples=20 00:26:38.905 iops : min= 64, max= 264, avg=101.20, stdev=40.00, samples=20 00:26:38.905 lat (msec) : 2=0.78%, 4=1.26%, 10=3.02%, 20=1.56%, 100=1.36% 00:26:38.905 lat (msec) : 250=89.88%, 500=2.14% 00:26:38.905 cpu : usr=97.49%, sys=1.67%, ctx=30, majf=0, minf=9 00:26:38.905 IO depths : 1=1.5%, 2=5.7%, 4=18.7%, 8=62.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=1028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename1: (groupid=0, jobs=1): err= 0: pid=1635390: Mon Jul 15 16:43:17 2024 00:26:38.905 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10029msec) 00:26:38.905 slat (nsec): min=13613, max=98375, avg=64793.19, stdev=11238.93 00:26:38.905 
clat (msec): min=189, max=320, avg=244.02, stdev=26.69 00:26:38.905 lat (msec): min=189, max=320, avg=244.08, stdev=26.69 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 190], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 232], 00:26:38.905 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.905 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 279], 00:26:38.905 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:26:38.905 | 99.99th=[ 321] 00:26:38.905 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=256.00, stdev=58.73, samples=20 00:26:38.905 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:26:38.905 lat (msec) : 250=58.38%, 500=41.62% 00:26:38.905 cpu : usr=98.21%, sys=1.30%, ctx=29, majf=0, minf=9 00:26:38.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename2: (groupid=0, jobs=1): err= 0: pid=1635391: Mon Jul 15 16:43:17 2024 00:26:38.905 read: IOPS=66, BW=268KiB/s (274kB/s)(2688KiB/10041msec) 00:26:38.905 slat (usec): min=9, max=161, avg=42.88, stdev=20.03 00:26:38.905 clat (msec): min=126, max=273, avg=238.66, stdev=30.72 00:26:38.905 lat (msec): min=126, max=273, avg=238.70, stdev=30.72 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 127], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 234], 00:26:38.905 | 30.00th=[ 236], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.905 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 268], 95.00th=[ 271], 00:26:38.905 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:26:38.905 | 99.99th=[ 275] 00:26:38.905 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=262.40, 
stdev=50.44, samples=20 00:26:38.905 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:26:38.905 lat (msec) : 250=52.38%, 500=47.62% 00:26:38.905 cpu : usr=96.89%, sys=2.09%, ctx=48, majf=0, minf=9 00:26:38.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename2: (groupid=0, jobs=1): err= 0: pid=1635392: Mon Jul 15 16:43:17 2024 00:26:38.905 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10028msec) 00:26:38.905 slat (nsec): min=8080, max=96787, avg=61799.21, stdev=17555.97 00:26:38.905 clat (msec): min=71, max=418, avg=244.08, stdev=52.17 00:26:38.905 lat (msec): min=71, max=418, avg=244.14, stdev=52.17 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 109], 5.00th=[ 124], 10.00th=[ 190], 20.00th=[ 224], 00:26:38.905 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.905 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 292], 95.00th=[ 363], 00:26:38.905 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 418], 00:26:38.905 | 99.99th=[ 418] 00:26:38.905 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=256.00, stdev=58.73, samples=20 00:26:38.905 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:26:38.905 lat (msec) : 100=0.30%, 250=57.32%, 500=42.38% 00:26:38.905 cpu : usr=97.20%, sys=1.85%, ctx=18, majf=0, minf=9 00:26:38.905 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:38.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.905 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.905 
latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.905 filename2: (groupid=0, jobs=1): err= 0: pid=1635393: Mon Jul 15 16:43:17 2024 00:26:38.905 read: IOPS=66, BW=268KiB/s (274kB/s)(2688KiB/10040msec) 00:26:38.905 slat (usec): min=9, max=109, avg=43.37, stdev=21.19 00:26:38.905 clat (msec): min=125, max=273, avg=238.65, stdev=30.92 00:26:38.905 lat (msec): min=125, max=273, avg=238.69, stdev=30.91 00:26:38.905 clat percentiles (msec): 00:26:38.905 | 1.00th=[ 127], 5.00th=[ 167], 10.00th=[ 190], 20.00th=[ 234], 00:26:38.905 | 30.00th=[ 234], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:26:38.905 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 268], 95.00th=[ 271], 00:26:38.905 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:26:38.905 | 99.99th=[ 275] 00:26:38.905 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=262.40, stdev=48.53, samples=20 00:26:38.905 iops : min= 32, max= 96, avg=65.60, stdev=12.13, samples=20 00:26:38.905 lat (msec) : 250=52.38%, 500=47.62% 00:26:38.906 cpu : usr=97.07%, sys=1.97%, ctx=57, majf=0, minf=9 00:26:38.906 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:38.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.906 filename2: (groupid=0, jobs=1): err= 0: pid=1635394: Mon Jul 15 16:43:17 2024 00:26:38.906 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10024msec) 00:26:38.906 slat (usec): min=8, max=146, avg=34.54, stdev=15.69 00:26:38.906 clat (msec): min=100, max=401, avg=244.17, stdev=30.31 00:26:38.906 lat (msec): min=100, max=401, avg=244.21, stdev=30.31 00:26:38.906 clat percentiles (msec): 00:26:38.906 | 1.00th=[ 167], 5.00th=[ 190], 10.00th=[ 197], 20.00th=[ 234], 00:26:38.906 | 30.00th=[ 236], 40.00th=[ 
241], 50.00th=[ 251], 60.00th=[ 255], 00:26:38.906 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:26:38.906 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 401], 99.95th=[ 401], 00:26:38.906 | 99.99th=[ 401] 00:26:38.906 bw ( KiB/s): min= 128, max= 384, per=3.70%, avg=256.00, stdev=58.73, samples=20 00:26:38.906 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:26:38.906 lat (msec) : 250=48.78%, 500=51.22% 00:26:38.906 cpu : usr=96.96%, sys=1.98%, ctx=63, majf=0, minf=9 00:26:38.906 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:38.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.906 filename2: (groupid=0, jobs=1): err= 0: pid=1635395: Mon Jul 15 16:43:17 2024 00:26:38.906 read: IOPS=63, BW=255KiB/s (262kB/s)(2560KiB/10024msec) 00:26:38.906 slat (usec): min=7, max=166, avg=64.37, stdev=21.41 00:26:38.906 clat (msec): min=188, max=459, avg=250.02, stdev=40.23 00:26:38.906 lat (msec): min=189, max=459, avg=250.08, stdev=40.22 00:26:38.906 clat percentiles (msec): 00:26:38.906 | 1.00th=[ 190], 5.00th=[ 190], 10.00th=[ 207], 20.00th=[ 234], 00:26:38.906 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 255], 00:26:38.906 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:26:38.906 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:26:38.906 | 99.99th=[ 460] 00:26:38.906 bw ( KiB/s): min= 128, max= 384, per=3.61%, avg=249.60, stdev=48.53, samples=20 00:26:38.906 iops : min= 32, max= 96, avg=62.40, stdev=12.13, samples=20 00:26:38.906 lat (msec) : 250=49.53%, 500=50.47% 00:26:38.906 cpu : usr=96.40%, sys=2.16%, ctx=108, majf=0, minf=9 00:26:38.906 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 
16=7.5%, 32=0.0%, >=64=0.0% 00:26:38.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.906 filename2: (groupid=0, jobs=1): err= 0: pid=1635396: Mon Jul 15 16:43:17 2024 00:26:38.906 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10032msec) 00:26:38.906 slat (usec): min=11, max=102, avg=63.93, stdev=17.64 00:26:38.906 clat (msec): min=71, max=417, avg=244.14, stdev=52.21 00:26:38.906 lat (msec): min=71, max=417, avg=244.21, stdev=52.22 00:26:38.906 clat percentiles (msec): 00:26:38.906 | 1.00th=[ 110], 5.00th=[ 125], 10.00th=[ 190], 20.00th=[ 224], 00:26:38.906 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:26:38.906 | 70.00th=[ 259], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 363], 00:26:38.906 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 418], 00:26:38.906 | 99.99th=[ 418] 00:26:38.906 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=256.00, stdev=58.73, samples=20 00:26:38.906 iops : min= 32, max= 96, avg=64.00, stdev=14.68, samples=20 00:26:38.906 lat (msec) : 100=0.30%, 250=57.32%, 500=42.38% 00:26:38.906 cpu : usr=96.25%, sys=2.13%, ctx=86, majf=0, minf=9 00:26:38.906 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:38.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.906 filename2: (groupid=0, jobs=1): err= 0: pid=1635397: Mon Jul 15 16:43:17 2024 00:26:38.906 read: IOPS=82, BW=329KiB/s (337kB/s)(3312KiB/10062msec) 00:26:38.906 slat (usec): min=5, max=175, avg=45.78, 
stdev=26.77 00:26:38.906 clat (msec): min=26, max=329, avg=193.94, stdev=51.16 00:26:38.906 lat (msec): min=26, max=330, avg=193.98, stdev=51.16 00:26:38.906 clat percentiles (msec): 00:26:38.906 | 1.00th=[ 27], 5.00th=[ 111], 10.00th=[ 117], 20.00th=[ 157], 00:26:38.906 | 30.00th=[ 167], 40.00th=[ 194], 50.00th=[ 205], 60.00th=[ 213], 00:26:38.906 | 70.00th=[ 232], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 253], 00:26:38.906 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 330], 99.95th=[ 330], 00:26:38.906 | 99.99th=[ 330] 00:26:38.906 bw ( KiB/s): min= 256, max= 512, per=4.70%, avg=324.80, stdev=74.88, samples=20 00:26:38.906 iops : min= 64, max= 128, avg=81.20, stdev=18.72, samples=20 00:26:38.906 lat (msec) : 50=1.93%, 250=89.86%, 500=8.21% 00:26:38.906 cpu : usr=97.37%, sys=1.68%, ctx=214, majf=0, minf=9 00:26:38.906 IO depths : 1=4.0%, 2=8.6%, 4=19.9%, 8=58.9%, 16=8.6%, 32=0.0%, >=64=0.0% 00:26:38.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 complete : 0=0.0%, 4=92.6%, 8=1.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 issued rwts: total=828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.906 filename2: (groupid=0, jobs=1): err= 0: pid=1635398: Mon Jul 15 16:43:17 2024 00:26:38.906 read: IOPS=94, BW=379KiB/s (388kB/s)(3816KiB/10062msec) 00:26:38.906 slat (usec): min=8, max=136, avg=15.73, stdev=15.80 00:26:38.906 clat (msec): min=24, max=327, avg=168.29, stdev=42.79 00:26:38.906 lat (msec): min=24, max=327, avg=168.30, stdev=42.79 00:26:38.906 clat percentiles (msec): 00:26:38.906 | 1.00th=[ 25], 5.00th=[ 102], 10.00th=[ 113], 20.00th=[ 144], 00:26:38.906 | 30.00th=[ 148], 40.00th=[ 163], 50.00th=[ 171], 60.00th=[ 186], 00:26:38.906 | 70.00th=[ 192], 80.00th=[ 201], 90.00th=[ 215], 95.00th=[ 230], 00:26:38.906 | 99.00th=[ 259], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:26:38.906 | 99.99th=[ 330] 00:26:38.906 bw ( KiB/s): min= 256, max= 
512, per=5.44%, avg=375.20, stdev=66.17, samples=20 00:26:38.906 iops : min= 64, max= 128, avg=93.80, stdev=16.54, samples=20 00:26:38.906 lat (msec) : 50=1.68%, 100=1.89%, 250=93.71%, 500=2.73% 00:26:38.906 cpu : usr=96.47%, sys=2.27%, ctx=183, majf=0, minf=9 00:26:38.906 IO depths : 1=1.6%, 2=4.0%, 4=13.2%, 8=70.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:38.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.906 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:38.906 00:26:38.906 Run status group 0 (all jobs): 00:26:38.906 READ: bw=6894KiB/s (7059kB/s), 255KiB/s-408KiB/s (262kB/s-418kB/s), io=67.8MiB (71.1MB), run=10024-10067msec 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.906 16:43:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:38.906 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.907 bdev_null0 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.907 [2024-07-15 16:43:17.533976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:26:38.907 bdev_null1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # local subsystem config 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.907 { 00:26:38.907 "params": { 00:26:38.907 "name": "Nvme$subsystem", 00:26:38.907 "trtype": "$TEST_TRANSPORT", 00:26:38.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.907 "adrfam": "ipv4", 00:26:38.907 "trsvcid": "$NVMF_PORT", 00:26:38.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.907 "hdgst": ${hdgst:-false}, 00:26:38.907 "ddgst": ${ddgst:-false} 00:26:38.907 }, 00:26:38.907 "method": "bdev_nvme_attach_controller" 00:26:38.907 } 00:26:38.907 EOF 00:26:38.907 )") 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.907 { 00:26:38.907 "params": { 00:26:38.907 "name": "Nvme$subsystem", 00:26:38.907 "trtype": "$TEST_TRANSPORT", 00:26:38.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.907 "adrfam": "ipv4", 00:26:38.907 "trsvcid": "$NVMF_PORT", 00:26:38.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.907 "hdgst": ${hdgst:-false}, 00:26:38.907 "ddgst": ${ddgst:-false} 00:26:38.907 }, 00:26:38.907 "method": "bdev_nvme_attach_controller" 00:26:38.907 } 00:26:38.907 EOF 00:26:38.907 )") 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:38.907 
16:43:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.907 "params": { 00:26:38.907 "name": "Nvme0", 00:26:38.907 "trtype": "tcp", 00:26:38.907 "traddr": "10.0.0.2", 00:26:38.907 "adrfam": "ipv4", 00:26:38.907 "trsvcid": "4420", 00:26:38.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:38.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:38.907 "hdgst": false, 00:26:38.907 "ddgst": false 00:26:38.907 }, 00:26:38.907 "method": "bdev_nvme_attach_controller" 00:26:38.907 },{ 00:26:38.907 "params": { 00:26:38.907 "name": "Nvme1", 00:26:38.907 "trtype": "tcp", 00:26:38.907 "traddr": "10.0.0.2", 00:26:38.907 "adrfam": "ipv4", 00:26:38.907 "trsvcid": "4420", 00:26:38.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.907 "hdgst": false, 00:26:38.907 "ddgst": false 00:26:38.907 }, 00:26:38.907 "method": "bdev_nvme_attach_controller" 00:26:38.907 }' 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:38.907 16:43:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.907 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:38.907 ... 00:26:38.907 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:38.907 ... 00:26:38.907 fio-3.35 00:26:38.908 Starting 4 threads 00:26:38.908 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.169 00:26:44.169 filename0: (groupid=0, jobs=1): err= 0: pid=1636893: Mon Jul 15 16:43:23 2024 00:26:44.169 read: IOPS=1889, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5003msec) 00:26:44.169 slat (nsec): min=5248, max=64081, avg=12146.91, stdev=6366.22 00:26:44.169 clat (usec): min=1699, max=8165, avg=4196.12, stdev=767.96 00:26:44.169 lat (usec): min=1712, max=8174, avg=4208.27, stdev=767.17 00:26:44.169 clat percentiles (usec): 00:26:44.169 | 1.00th=[ 2540], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3720], 00:26:44.169 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:26:44.169 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 5407], 95.00th=[ 5932], 00:26:44.169 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7242], 99.95th=[ 7439], 00:26:44.169 | 99.99th=[ 8160] 00:26:44.169 bw ( KiB/s): min=14448, max=15904, per=25.46%, avg=15115.20, stdev=410.43, samples=10 00:26:44.169 iops : min= 1806, max= 1988, avg=1889.40, stdev=51.30, samples=10 00:26:44.169 lat (msec) : 2=0.01%, 4=43.44%, 10=56.55% 00:26:44.169 cpu : usr=95.38%, sys=4.18%, ctx=9, majf=0, minf=26 
00:26:44.169 IO depths : 1=0.1%, 2=2.7%, 4=69.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.169 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.169 issued rwts: total=9455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.169 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:44.169 filename0: (groupid=0, jobs=1): err= 0: pid=1636894: Mon Jul 15 16:43:23 2024 00:26:44.169 read: IOPS=1834, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5001msec) 00:26:44.169 slat (nsec): min=6922, max=55222, avg=12112.44, stdev=6369.62 00:26:44.169 clat (usec): min=1400, max=7526, avg=4322.86, stdev=780.13 00:26:44.169 lat (usec): min=1408, max=7534, avg=4334.97, stdev=779.23 00:26:44.169 clat percentiles (usec): 00:26:44.170 | 1.00th=[ 2540], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3818], 00:26:44.170 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:26:44.170 | 70.00th=[ 4359], 80.00th=[ 4752], 90.00th=[ 5604], 95.00th=[ 5997], 00:26:44.170 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7046], 99.95th=[ 7111], 00:26:44.170 | 99.99th=[ 7504] 00:26:44.170 bw ( KiB/s): min=14192, max=15552, per=24.85%, avg=14753.56, stdev=444.67, samples=9 00:26:44.170 iops : min= 1774, max= 1944, avg=1844.11, stdev=55.51, samples=9 00:26:44.170 lat (msec) : 2=0.09%, 4=34.68%, 10=65.24% 00:26:44.170 cpu : usr=95.20%, sys=4.36%, ctx=9, majf=0, minf=30 00:26:44.170 IO depths : 1=0.2%, 2=3.0%, 4=68.7%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.170 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.170 issued rwts: total=9176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:44.170 filename1: (groupid=0, jobs=1): err= 0: pid=1636895: Mon Jul 15 16:43:23 2024 00:26:44.170 read: 
IOPS=1888, BW=14.8MiB/s (15.5MB/s)(74.4MiB/5043msec) 00:26:44.170 slat (nsec): min=5168, max=70044, avg=13709.96, stdev=7472.76 00:26:44.170 clat (usec): min=1258, max=44696, avg=4171.41, stdev=1495.87 00:26:44.170 lat (usec): min=1271, max=44712, avg=4185.12, stdev=1495.71 00:26:44.170 clat percentiles (usec): 00:26:44.170 | 1.00th=[ 2769], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3752], 00:26:44.170 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:26:44.170 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5407], 00:26:44.170 | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[43254], 99.95th=[44827], 00:26:44.170 | 99.99th=[44827] 00:26:44.170 bw ( KiB/s): min=13995, max=16544, per=25.65%, avg=15225.10, stdev=727.65, samples=10 00:26:44.170 iops : min= 1749, max= 2068, avg=1903.10, stdev=91.03, samples=10 00:26:44.170 lat (msec) : 2=0.04%, 4=43.67%, 10=56.18%, 50=0.12% 00:26:44.170 cpu : usr=91.81%, sys=6.05%, ctx=198, majf=0, minf=76 00:26:44.170 IO depths : 1=0.1%, 2=3.1%, 4=70.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.170 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.170 issued rwts: total=9522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:44.170 filename1: (groupid=0, jobs=1): err= 0: pid=1636896: Mon Jul 15 16:43:23 2024 00:26:44.170 read: IOPS=1853, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5001msec) 00:26:44.170 slat (nsec): min=7072, max=60897, avg=11978.53, stdev=5883.08 00:26:44.170 clat (usec): min=1340, max=46649, avg=4277.76, stdev=1381.78 00:26:44.170 lat (usec): min=1360, max=46677, avg=4289.73, stdev=1381.84 00:26:44.170 clat percentiles (usec): 00:26:44.170 | 1.00th=[ 3195], 5.00th=[ 3589], 10.00th=[ 3720], 20.00th=[ 3851], 00:26:44.170 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4228], 00:26:44.170 | 70.00th=[ 
4293], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5604], 00:26:44.170 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[46400], 00:26:44.170 | 99.99th=[46400] 00:26:44.170 bw ( KiB/s): min=13237, max=15696, per=24.82%, avg=14734.78, stdev=773.07, samples=9 00:26:44.170 iops : min= 1654, max= 1962, avg=1841.78, stdev=96.79, samples=9 00:26:44.170 lat (msec) : 2=0.04%, 4=35.04%, 10=64.83%, 50=0.09% 00:26:44.170 cpu : usr=94.96%, sys=4.42%, ctx=21, majf=0, minf=32 00:26:44.170 IO depths : 1=0.1%, 2=5.3%, 4=67.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.170 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.170 issued rwts: total=9267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:44.170 00:26:44.170 Run status group 0 (all jobs): 00:26:44.170 READ: bw=58.0MiB/s (60.8MB/s), 14.3MiB/s-14.8MiB/s (15.0MB/s-15.5MB/s), io=292MiB (307MB), run=5001-5043msec 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.428 16:43:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.428 00:26:44.428 real 0m24.607s 00:26:44.428 user 4m32.263s 00:26:44.428 sys 0m7.330s 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:44.428 16:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 ************************************ 00:26:44.428 END TEST fio_dif_rand_params 00:26:44.428 ************************************ 00:26:44.428 16:43:23 nvmf_dif -- 
common/autotest_common.sh@1142 -- # return 0 00:26:44.428 16:43:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:44.428 16:43:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:44.428 16:43:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.428 16:43:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:44.428 ************************************ 00:26:44.428 START TEST fio_dif_digest 00:26:44.428 ************************************ 00:26:44.428 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:44.428 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:44.687 bdev_null0 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:44.687 [2024-07-15 16:43:24.058001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest 
-- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:44.687 { 00:26:44.687 "params": { 00:26:44.687 "name": "Nvme$subsystem", 00:26:44.687 "trtype": "$TEST_TRANSPORT", 00:26:44.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.687 "adrfam": "ipv4", 00:26:44.687 "trsvcid": "$NVMF_PORT", 00:26:44.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.687 "hdgst": ${hdgst:-false}, 00:26:44.687 "ddgst": ${ddgst:-false} 00:26:44.687 }, 00:26:44.687 "method": "bdev_nvme_attach_controller" 00:26:44.687 } 00:26:44.687 EOF 00:26:44.687 )") 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:44.687 "params": { 00:26:44.687 "name": "Nvme0", 00:26:44.687 "trtype": "tcp", 00:26:44.687 "traddr": "10.0.0.2", 00:26:44.687 "adrfam": "ipv4", 00:26:44.687 "trsvcid": "4420", 00:26:44.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:44.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:44.687 "hdgst": true, 00:26:44.687 "ddgst": true 00:26:44.687 }, 00:26:44.687 "method": "bdev_nvme_attach_controller" 00:26:44.687 }' 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:44.687 16:43:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.979 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:44.979 ... 
00:26:44.979 fio-3.35 00:26:44.979 Starting 3 threads 00:26:44.979 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.171 00:26:57.171 filename0: (groupid=0, jobs=1): err= 0: pid=1637658: Mon Jul 15 16:43:34 2024 00:26:57.171 read: IOPS=201, BW=25.1MiB/s (26.3MB/s)(251MiB/10004msec) 00:26:57.171 slat (nsec): min=6788, max=62833, avg=15607.90, stdev=5577.76 00:26:57.171 clat (usec): min=8704, max=24576, avg=14904.43, stdev=1688.98 00:26:57.171 lat (usec): min=8718, max=24608, avg=14920.04, stdev=1688.02 00:26:57.171 clat percentiles (usec): 00:26:57.171 | 1.00th=[10028], 5.00th=[12125], 10.00th=[13042], 20.00th=[13698], 00:26:57.171 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15270], 00:26:57.171 | 70.00th=[15664], 80.00th=[16188], 90.00th=[16909], 95.00th=[17433], 00:26:57.171 | 99.00th=[18744], 99.50th=[19792], 99.90th=[24511], 99.95th=[24511], 00:26:57.171 | 99.99th=[24511] 00:26:57.171 bw ( KiB/s): min=21034, max=27904, per=34.00%, avg=25709.85, stdev=1598.86, samples=20 00:26:57.171 iops : min= 164, max= 218, avg=200.80, stdev=12.51, samples=20 00:26:57.171 lat (msec) : 10=0.75%, 20=98.76%, 50=0.50% 00:26:57.171 cpu : usr=92.10%, sys=7.02%, ctx=146, majf=0, minf=109 00:26:57.171 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.171 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.171 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:57.171 filename0: (groupid=0, jobs=1): err= 0: pid=1637659: Mon Jul 15 16:43:34 2024 00:26:57.171 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(245MiB/10047msec) 00:26:57.171 slat (nsec): min=6035, max=50855, avg=14017.19, stdev=3359.38 00:26:57.171 clat (usec): min=8490, max=58510, avg=15368.11, stdev=4373.52 00:26:57.171 lat (usec): min=8503, max=58522, avg=15382.13, 
stdev=4373.32 00:26:57.171 clat percentiles (usec): 00:26:57.171 | 1.00th=[11469], 5.00th=[12780], 10.00th=[13304], 20.00th=[13829], 00:26:57.171 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:26:57.171 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16909], 95.00th=[17433], 00:26:57.171 | 99.00th=[47973], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459], 00:26:57.171 | 99.99th=[58459] 00:26:57.171 bw ( KiB/s): min=20521, max=27648, per=33.06%, avg=25005.20, stdev=1877.68, samples=20 00:26:57.171 iops : min= 160, max= 216, avg=195.30, stdev=14.73, samples=20 00:26:57.171 lat (msec) : 10=0.36%, 20=98.52%, 50=0.15%, 100=0.97% 00:26:57.171 cpu : usr=92.11%, sys=7.43%, ctx=34, majf=0, minf=138 00:26:57.171 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.171 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.171 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:57.171 filename0: (groupid=0, jobs=1): err= 0: pid=1637660: Mon Jul 15 16:43:34 2024 00:26:57.171 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(246MiB/10005msec) 00:26:57.171 slat (nsec): min=7785, max=49576, avg=14005.41, stdev=3290.12 00:26:57.171 clat (usec): min=8264, max=59153, avg=15222.99, stdev=3225.63 00:26:57.171 lat (usec): min=8278, max=59171, avg=15236.99, stdev=3225.54 00:26:57.171 clat percentiles (usec): 00:26:57.171 | 1.00th=[10552], 5.00th=[12780], 10.00th=[13304], 20.00th=[13960], 00:26:57.171 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:26:57.171 | 70.00th=[15664], 80.00th=[16188], 90.00th=[16712], 95.00th=[17433], 00:26:57.171 | 99.00th=[19006], 99.50th=[19792], 99.90th=[58983], 99.95th=[58983], 00:26:57.171 | 99.99th=[58983] 00:26:57.171 bw ( KiB/s): min=21760, max=27392, per=33.31%, avg=25190.40, stdev=1277.03, 
samples=20 00:26:57.171 iops : min= 170, max= 214, avg=196.80, stdev= 9.98, samples=20 00:26:57.171 lat (msec) : 10=0.61%, 20=98.93%, 100=0.46% 00:26:57.171 cpu : usr=92.60%, sys=6.95%, ctx=20, majf=0, minf=116 00:26:57.171 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.171 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.171 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:57.171 00:26:57.171 Run status group 0 (all jobs): 00:26:57.171 READ: bw=73.9MiB/s (77.4MB/s), 24.3MiB/s-25.1MiB/s (25.5MB/s-26.3MB/s), io=742MiB (778MB), run=10004-10047msec 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.171 00:26:57.171 real 0m11.142s 00:26:57.171 user 0m28.793s 00:26:57.171 sys 0m2.418s 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:57.171 16:43:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:57.171 ************************************ 00:26:57.171 END TEST fio_dif_digest 00:26:57.171 ************************************ 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:57.171 16:43:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:57.171 16:43:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:57.171 rmmod nvme_tcp 00:26:57.171 rmmod nvme_fabrics 00:26:57.171 rmmod nvme_keyring 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1631593 ']' 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1631593 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1631593 ']' 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1631593 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:57.171 16:43:35 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631593 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631593' 00:26:57.171 killing process with pid 1631593 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1631593 00:26:57.171 16:43:35 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1631593 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:57.171 16:43:35 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:57.171 Waiting for block devices as requested 00:26:57.171 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:57.431 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:57.431 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:57.690 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:57.690 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:57.690 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:57.690 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:57.690 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:57.949 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:57.949 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:57.949 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:57.949 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:58.208 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:58.208 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:58.208 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:58.466 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:58.466 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:58.466 16:43:38 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:58.466 16:43:38 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:58.466 
16:43:38 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.466 16:43:38 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.466 16:43:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.466 16:43:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:58.466 16:43:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.001 16:43:40 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.001 00:27:01.001 real 1m6.959s 00:27:01.001 user 6m29.047s 00:27:01.001 sys 0m18.739s 00:27:01.001 16:43:40 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.001 16:43:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:01.001 ************************************ 00:27:01.001 END TEST nvmf_dif 00:27:01.001 ************************************ 00:27:01.001 16:43:40 -- common/autotest_common.sh@1142 -- # return 0 00:27:01.001 16:43:40 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:01.001 16:43:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:01.001 16:43:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.001 16:43:40 -- common/autotest_common.sh@10 -- # set +x 00:27:01.001 ************************************ 00:27:01.001 START TEST nvmf_abort_qd_sizes 00:27:01.001 ************************************ 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:01.001 * Looking for test storage... 
00:27:01.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:01.001 16:43:40 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.001 16:43:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.380 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.380 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:27:02.380 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:02.380 16:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.640 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.640 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.640 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:02.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:02.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:27:02.640 00:27:02.640 --- 10.0.0.2 ping statistics --- 00:27:02.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.640 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:02.640 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:27:02.640 00:27:02.641 --- 10.0.0.1 ping statistics --- 00:27:02.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.641 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:27:02.641 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.641 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:02.641 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:02.641 16:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:03.588 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:03.588 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:03.588 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:03.588 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:03.588 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:03.588 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:03.588 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:03.588 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:03.588 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:03.588 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:03.588 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:03.588 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:03.588 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:03.588 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:03.588 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:27:03.846 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:04.808 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1642435 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1642435 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1642435 ']' 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:04.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.808 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:04.808 [2024-07-15 16:43:44.284677] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:27:04.808 [2024-07-15 16:43:44.284747] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.808 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.808 [2024-07-15 16:43:44.352750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.065 [2024-07-15 16:43:44.473336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.065 [2024-07-15 16:43:44.473403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.065 [2024-07-15 16:43:44.473420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.065 [2024-07-15 16:43:44.473433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.065 [2024-07-15 16:43:44.473445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.065 [2024-07-15 16:43:44.473533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.065 [2024-07-15 16:43:44.473604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.065 [2024-07-15 16:43:44.473627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.065 [2024-07-15 16:43:44.473631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:05.065 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.066 16:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:05.323 ************************************ 00:27:05.323 START TEST spdk_target_abort 00:27:05.323 ************************************ 00:27:05.323 16:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:05.323 16:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:05.323 16:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:27:05.323 16:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.323 16:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.604 spdk_targetn1 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.604 [2024-07-15 16:43:47.505064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.604 [2024-07-15 16:43:47.537367] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:08.604 16:43:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.604 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.169 Initializing NVMe Controllers 00:27:11.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:11.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:11.169 Initialization complete. Launching workers. 
00:27:11.169 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10996, failed: 0 00:27:11.169 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1272, failed to submit 9724 00:27:11.169 success 801, unsuccess 471, failed 0 00:27:11.169 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:11.169 16:43:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.169 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.454 Initializing NVMe Controllers 00:27:14.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:14.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:14.454 Initialization complete. Launching workers. 
00:27:14.454 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8635, failed: 0 00:27:14.454 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7403 00:27:14.454 success 320, unsuccess 912, failed 0 00:27:14.454 16:43:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:14.454 16:43:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:14.454 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.735 Initializing NVMe Controllers 00:27:17.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:17.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:17.735 Initialization complete. Launching workers. 
00:27:17.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31403, failed: 0 00:27:17.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2735, failed to submit 28668 00:27:17.735 success 548, unsuccess 2187, failed 0 00:27:17.735 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:17.735 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.735 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:17.735 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.735 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:17.735 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.735 16:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1642435 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1642435 ']' 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1642435 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1642435 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1642435' 00:27:19.113 killing process with pid 1642435 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1642435 00:27:19.113 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1642435 00:27:19.370 00:27:19.370 real 0m14.143s 00:27:19.370 user 0m52.475s 00:27:19.371 sys 0m2.978s 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:19.371 ************************************ 00:27:19.371 END TEST spdk_target_abort 00:27:19.371 ************************************ 00:27:19.371 16:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:19.371 16:43:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:19.371 16:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:19.371 16:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.371 16:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:19.371 ************************************ 00:27:19.371 START TEST kernel_target_abort 00:27:19.371 ************************************ 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:19.371 16:43:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:19.371 16:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:20.304 Waiting for block devices as requested 00:27:20.304 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:20.563 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:20.563 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:20.821 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:20.821 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:20.821 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:20.821 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:21.079 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:21.079 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:21.079 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:21.079 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:21.338 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:21.338 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:21.338 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:21.338 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:21.595 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:21.595 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:21.595 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:21.852 No valid GPT data, bailing 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:21.852 16:44:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:21.852 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:21.852 00:27:21.852 Discovery Log Number of Records 2, Generation counter 2 00:27:21.852 =====Discovery Log Entry 0====== 00:27:21.852 trtype: tcp 00:27:21.852 adrfam: ipv4 00:27:21.852 subtype: current discovery subsystem 00:27:21.852 treq: not specified, sq flow control disable supported 00:27:21.852 portid: 1 00:27:21.852 trsvcid: 4420 00:27:21.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:21.852 traddr: 10.0.0.1 00:27:21.852 eflags: none 00:27:21.852 sectype: none 00:27:21.852 =====Discovery Log Entry 1====== 00:27:21.852 trtype: tcp 00:27:21.852 adrfam: ipv4 00:27:21.852 subtype: nvme subsystem 00:27:21.852 treq: not specified, sq flow control disable supported 00:27:21.852 portid: 1 00:27:21.852 trsvcid: 4420 00:27:21.853 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:21.853 traddr: 10.0.0.1 00:27:21.853 eflags: none 00:27:21.853 
sectype: none 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:21.853 16:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:21.853 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.130 Initializing NVMe Controllers 00:27:25.130 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:25.130 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:25.130 Initialization complete. Launching workers. 
00:27:25.130 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29988, failed: 0 00:27:25.130 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29988, failed to submit 0 00:27:25.130 success 0, unsuccess 29988, failed 0 00:27:25.130 16:44:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:25.130 16:44:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:25.130 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.406 Initializing NVMe Controllers 00:27:28.406 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:28.406 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:28.406 Initialization complete. Launching workers. 
00:27:28.406 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63723, failed: 0 00:27:28.406 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16070, failed to submit 47653 00:27:28.406 success 0, unsuccess 16070, failed 0 00:27:28.406 16:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:28.406 16:44:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:28.406 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.753 Initializing NVMe Controllers 00:27:31.753 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:31.753 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:31.753 Initialization complete. Launching workers. 
00:27:31.753 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59489, failed: 0 00:27:31.753 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14858, failed to submit 44631 00:27:31.753 success 0, unsuccess 14858, failed 0 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:31.753 16:44:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:32.319 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:32.319 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:32.319 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:32.319 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:32.319 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:32.319 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:32.319 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:32.319 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:32.319 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:32.319 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:32.319 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:32.319 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:32.319 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:32.577 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:32.577 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:32.577 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:33.512 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:33.512 00:27:33.512 real 0m14.108s 00:27:33.512 user 0m4.853s 00:27:33.512 sys 0m3.331s 00:27:33.512 16:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.512 16:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.512 ************************************ 00:27:33.512 END TEST kernel_target_abort 00:27:33.512 ************************************ 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.512 16:44:12 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.512 rmmod nvme_tcp 00:27:33.512 rmmod nvme_fabrics 
00:27:33.512 rmmod nvme_keyring 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1642435 ']' 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1642435 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1642435 ']' 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1642435 00:27:33.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1642435) - No such process 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1642435 is not found' 00:27:33.512 Process with pid 1642435 is not found 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:33.512 16:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:34.443 Waiting for block devices as requested 00:27:34.700 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:34.700 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:34.957 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:34.957 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:34.957 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:34.957 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:35.215 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:35.215 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:35.215 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:35.215 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:35.475 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:35.475 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:35.475 0000:80:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:27:35.475 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:35.733 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:35.733 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:35.733 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:35.991 16:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:35.991 16:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:35.991 16:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.991 16:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:35.991 16:44:15 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.991 16:44:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:35.991 16:44:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.892 16:44:17 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:37.892 00:27:37.892 real 0m37.331s 00:27:37.892 user 0m59.389s 00:27:37.892 sys 0m9.372s 00:27:37.892 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.892 16:44:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:37.892 ************************************ 00:27:37.892 END TEST nvmf_abort_qd_sizes 00:27:37.892 ************************************ 00:27:37.892 16:44:17 -- common/autotest_common.sh@1142 -- # return 0 00:27:37.892 16:44:17 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:37.892 16:44:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:37.892 16:44:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.892 16:44:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.892 ************************************ 00:27:37.892 START TEST keyring_file 00:27:37.892 
************************************ 00:27:37.893 16:44:17 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:38.151 * Looking for test storage... 00:27:38.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:38.151 16:44:17 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:38.151 16:44:17 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.151 
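Editor's note: the `configure_kernel_target` steps traced earlier in this log (nvmf/common.sh@658 through @677, in the `kernel_target_abort` section) amount to a standard configfs setup of the in-kernel NVMe/TCP target. The sketch below is a hedged reconstruction, not the script itself: the NQN, device path, address, port, and the order of writes are taken from the trace, but the trace truncates the redirection targets, so the configfs attribute file names shown here are the standard `nvmet` ones and are an assumption. Requires root and the `nvmet`/`nvmet_tcp` modules.

```shell
#!/bin/sh
# Hedged reconstruction of the kernel NVMe/TCP target setup traced in this
# log. Values (NQN, serial, namespace device, 10.0.0.1:4420, ipv4) come from
# the xtrace output; attribute file names are the usual nvmet configfs ones.
set -e

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet nvmet_tcp

# Create subsystem, one namespace, and one port (mirrors the traced mkdirs).
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

# Subsystem: serial number and allow-any-host (traced echoes @665, @667).
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_serial"
echo 1 > "$subsys/attr_allow_any_host"

# Namespace 1 backed by the block device the trace selected (@668, @669).
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# Port 1: NVMe/TCP listener on 10.0.0.1:4420 (@671 through @674).
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem on the port (the traced ln -s @677).
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
```

After this, the traced `nvme discover -a 10.0.0.1 -t tcp -s 4420 ...` sees two discovery log entries (the discovery subsystem plus `nqn.2016-06.io.spdk:testnqn`), matching the output recorded above; `clean_kernel_target` in the log undoes it by removing the symlink and rmdir'ing the same paths in reverse order.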
16:44:17 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.151 16:44:17 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.151 16:44:17 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.151 16:44:17 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.151 16:44:17 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.151 16:44:17 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.151 16:44:17 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.152 16:44:17 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.152 16:44:17 
keyring_file -- paths/export.sh@5 -- # export PATH 00:27:38.152 16:44:17 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bIU8BDLLA6 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bIU8BDLLA6 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bIU8BDLLA6 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bIU8BDLLA6 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.amfaFbnpFg 00:27:38.152 16:44:17 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:38.152 16:44:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.amfaFbnpFg 00:27:38.152 16:44:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.amfaFbnpFg 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.amfaFbnpFg 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@30 -- # tgtpid=1648200 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:38.152 16:44:17 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1648200 00:27:38.152 16:44:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1648200 ']' 00:27:38.152 16:44:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.152 16:44:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.152 16:44:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
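The prep_key calls traced above generate a temp file, format the hex key as an NVMe TLS interchange PSK via an inline `python -` snippet, and lock the file down to 0600. A stand-alone sketch of that flow is below; the exact interchange layout (base64 of the raw key bytes plus a little-endian CRC32 trailer under the `NVMeTLSkey-1:00:` prefix) is an assumption based on the NVMe/TCP PSK interchange format, not copied from SPDK's `format_key` implementation.

```shell
# Sketch of the prep_key flow: format a hex key as an interchange PSK
# and store it in a 0600 temp file. Layout details are assumptions.
key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'EOF'
import sys, base64, binascii, struct
raw = binascii.unhexlify(sys.argv[1])
# CRC32 trailer, packed little-endian (assumed layout)
crc = struct.pack('<I', binascii.crc32(raw) & 0xffffffff)
print('NVMeTLSkey-1:00:%s:' % base64.b64encode(raw + crc).decode())
EOF
)
path=$(mktemp)
echo "$psk" > "$path"
chmod 0600 "$path"   # keyring_file_add_key rejects anything looser
echo "$path"
```

The 0600 step matters: as the later part of this run shows, SPDK's keyring refuses key files with group or world permission bits set.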
00:27:38.152 16:44:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.152 16:44:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:38.152 [2024-07-15 16:44:17.673240] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:27:38.152 [2024-07-15 16:44:17.673337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648200 ] 00:27:38.152 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.152 [2024-07-15 16:44:17.729981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.410 [2024-07-15 16:44:17.844788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:39.344 16:44:18 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:39.344 [2024-07-15 16:44:18.605495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.344 null0 00:27:39.344 [2024-07-15 16:44:18.637541] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:39.344 [2024-07-15 16:44:18.637960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:39.344 [2024-07-15 16:44:18.645558] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.344 16:44:18 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:39.344 16:44:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:39.345 [2024-07-15 16:44:18.653560] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:39.345 request: 00:27:39.345 { 00:27:39.345 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.345 "secure_channel": false, 00:27:39.345 "listen_address": { 00:27:39.345 "trtype": "tcp", 00:27:39.345 "traddr": "127.0.0.1", 00:27:39.345 "trsvcid": "4420" 00:27:39.345 }, 00:27:39.345 "method": "nvmf_subsystem_add_listener", 00:27:39.345 "req_id": 1 00:27:39.345 } 00:27:39.345 Got JSON-RPC error response 00:27:39.345 response: 00:27:39.345 { 00:27:39.345 "code": -32602, 00:27:39.345 "message": "Invalid parameters" 00:27:39.345 } 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@670 -- 
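The `NOT rpc_cmd ...` / `valid_exec_arg` sequence traced above is autotest's idiom for asserting that an RPC is *expected* to fail (here, adding a listener that already exists). A minimal sketch of the inversion wrapper; the real helper in `autotest_common.sh` also validates that its first argument is an executable function or binary before running it:

```shell
# Invert a command's exit status: succeed only when the command fails.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, as the negative test expects
}

NOT false && echo "negative test passed"   # prints "negative test passed"
```

Wrapping the command rather than checking `$?` afterwards keeps the expected-failure assertion on the same line as the call, which is why the trace shows the whole RPC repeated inside the wrapper.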
# [[ -n '' ]] 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.345 16:44:18 keyring_file -- keyring/file.sh@46 -- # bperfpid=1648337 00:27:39.345 16:44:18 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:39.345 16:44:18 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1648337 /var/tmp/bperf.sock 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1648337 ']' 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:39.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.345 16:44:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:39.345 [2024-07-15 16:44:18.701713] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:27:39.345 [2024-07-15 16:44:18.701788] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648337 ] 00:27:39.345 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.345 [2024-07-15 16:44:18.763540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.345 [2024-07-15 16:44:18.879563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.603 16:44:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.603 16:44:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:39.603 16:44:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:39.603 16:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:39.860 16:44:19 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.amfaFbnpFg 00:27:39.860 16:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.amfaFbnpFg 00:27:40.117 16:44:19 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:40.117 16:44:19 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:40.117 16:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.117 16:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:40.117 16:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.375 16:44:19 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.bIU8BDLLA6 == 
\/\t\m\p\/\t\m\p\.\b\I\U\8\B\D\L\L\A\6 ]] 00:27:40.375 16:44:19 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:40.375 16:44:19 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:40.375 16:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.375 16:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.375 16:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:40.632 16:44:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.amfaFbnpFg == \/\t\m\p\/\t\m\p\.\a\m\f\a\F\b\n\p\F\g ]] 00:27:40.632 16:44:19 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:40.632 16:44:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:40.632 16:44:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.632 16:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.632 16:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.632 16:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:40.891 16:44:20 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:40.891 16:44:20 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:40.891 16:44:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:40.891 16:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.891 16:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.891 16:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.891 16:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:41.148 16:44:20 keyring_file -- keyring/file.sh@54 -- # 
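The `get_refcnt` checks above chain two jq filters over `keyring_get_keys` output: `select(.name == "keyN")` to pick the key object, then `-r .refcnt` to pull the raw count. The same filter chain against a canned payload (field names taken from this log; the paths and values are made up for illustration):

```shell
# Mirror keyring/common.sh's get_key + get_refcnt jq pipeline
# against a hand-written keyring_get_keys-style JSON array.
keys='[{"name":"key0","path":"/tmp/tmp.XXXXXXXXXX","refcnt":1},
       {"name":"key1","path":"/tmp/tmp.YYYYYYYYYY","refcnt":2}]'
refcnt=$(echo "$keys" | jq -r '.[] | select(.name == "key0") | .refcnt')
echo "$refcnt"   # prints 1
```

The refcount is what the test actually asserts on: it goes to 2 while a bdev controller holds the key and back to 1 after `bdev_nvme_detach_controller`, as the `(( 2 == 2 ))` / `(( 1 == 1 ))` checks in the trace show.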
(( 1 == 1 )) 00:27:41.148 16:44:20 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.148 16:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.148 [2024-07-15 16:44:20.722517] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:41.406 nvme0n1 00:27:41.406 16:44:20 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:41.406 16:44:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.406 16:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.406 16:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.406 16:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.406 16:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.664 16:44:21 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:41.664 16:44:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:41.664 16:44:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:41.665 16:44:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.665 16:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.665 16:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.665 16:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:41.923 16:44:21 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:41.923 16:44:21 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:41.923 Running I/O for 1 seconds... 00:27:42.856 00:27:42.856 Latency(us) 00:27:42.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.856 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:42.856 nvme0n1 : 1.02 4400.78 17.19 0.00 0.00 28746.28 6456.51 33981.63 00:27:42.856 =================================================================================================================== 00:27:42.856 Total : 4400.78 17.19 0.00 0.00 28746.28 6456.51 33981.63 00:27:42.856 0 00:27:42.856 16:44:22 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:43.113 16:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:43.113 16:44:22 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:43.113 16:44:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:43.113 16:44:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:43.113 16:44:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.113 16:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.113 16:44:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:43.371 16:44:22 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:43.371 16:44:22 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:43.371 16:44:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:43.371 16:44:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:43.371 16:44:22 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.371 16:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.371 16:44:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:43.629 16:44:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:43.629 16:44:23 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:43.629 16:44:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:43.629 16:44:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:43.629 16:44:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:43.629 16:44:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.630 16:44:23 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:43.630 16:44:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:43.630 16:44:23 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:43.630 16:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:43.886 [2024-07-15 16:44:23.458381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:43.886 [2024-07-15 16:44:23.458691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257b9a0 (107): Transport endpoint is not connected 00:27:43.886 [2024-07-15 16:44:23.459681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257b9a0 (9): Bad file descriptor 00:27:43.886 [2024-07-15 16:44:23.460680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:43.886 [2024-07-15 16:44:23.460702] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:43.886 [2024-07-15 16:44:23.460717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:43.886 request: 00:27:43.886 { 00:27:43.886 "name": "nvme0", 00:27:43.886 "trtype": "tcp", 00:27:43.886 "traddr": "127.0.0.1", 00:27:43.886 "adrfam": "ipv4", 00:27:43.886 "trsvcid": "4420", 00:27:43.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.886 "prchk_reftag": false, 00:27:43.886 "prchk_guard": false, 00:27:43.886 "hdgst": false, 00:27:43.886 "ddgst": false, 00:27:43.886 "psk": "key1", 00:27:43.886 "method": "bdev_nvme_attach_controller", 00:27:43.886 "req_id": 1 00:27:43.886 } 00:27:43.886 Got JSON-RPC error response 00:27:43.886 response: 00:27:43.886 { 00:27:43.886 "code": -5, 00:27:43.886 "message": "Input/output error" 00:27:43.886 } 00:27:43.886 16:44:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:43.886 16:44:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:43.886 16:44:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:43.886 16:44:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:43.886 16:44:23 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:43.886 
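After each expected failure, the harness captures the exit status into `es`, checks `(( es > 128 ))` to distinguish signal deaths from ordinary failures, and finally asserts the status was non-zero. A compressed sketch of that bookkeeping, with a hypothetical failing command standing in for the RPC:

```shell
# Exit-status bookkeeping after an expected failure, as in
# autotest_common.sh's es handling. some_failing_cmd is a stand-in.
some_failing_cmd() { return 1; }

es=0
some_failing_cmd || es=$?
if (( es > 128 )); then
    echo "killed by signal $((es - 128))"
fi
(( es != 0 )) && echo "command failed as expected"
```

Statuses above 128 conventionally mean the process died from signal `es - 128`, so the harness can tell a clean JSON-RPC error (like the `-5` Input/output error above) from a crashed target.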
16:44:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:43.886 16:44:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:44.143 16:44:23 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:44.143 16:44:23 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.143 16:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:44.400 16:44:23 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:44.400 16:44:23 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:44.400 16:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:44.681 16:44:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:44.681 16:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:44.953 16:44:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:44.953 16:44:24 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:44.953 16:44:24 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.211 16:44:24 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:45.211 16:44:24 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.bIU8BDLLA6 00:27:45.211 16:44:24 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:45.211 16:44:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:45.211 16:44:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:45.211 16:44:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:45.211 16:44:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:45.211 16:44:24 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:45.211 16:44:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:45.211 16:44:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:45.211 16:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:45.468 [2024-07-15 16:44:24.947573] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bIU8BDLLA6': 0100660 00:27:45.468 [2024-07-15 16:44:24.947611] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:45.468 request: 00:27:45.468 { 00:27:45.468 "name": "key0", 00:27:45.468 "path": "/tmp/tmp.bIU8BDLLA6", 00:27:45.468 "method": "keyring_file_add_key", 00:27:45.468 "req_id": 1 00:27:45.468 } 00:27:45.469 Got JSON-RPC error response 00:27:45.469 response: 00:27:45.469 { 00:27:45.469 "code": -1, 00:27:45.469 "message": "Operation not permitted" 
00:27:45.469 } 00:27:45.469 16:44:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:45.469 16:44:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:45.469 16:44:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:45.469 16:44:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:45.469 16:44:24 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.bIU8BDLLA6 00:27:45.469 16:44:24 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:45.469 16:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bIU8BDLLA6 00:27:45.726 16:44:25 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.bIU8BDLLA6 00:27:45.726 16:44:25 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:45.726 16:44:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:45.726 16:44:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:45.726 16:44:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.726 16:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.726 16:44:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:45.984 16:44:25 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:45.984 16:44:25 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:45.984 16:44:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:45.984 16:44:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
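The `chmod 0660` / `chmod 0600` round-trip above verifies that `keyring_file_add_key` refuses key files readable by anyone but the owner (note the `0100660` in the error, which is `st_mode` in octal including the regular-file type bits). The mode check itself can be reproduced with `stat`; this is a sketch of the equivalent userspace check, not SPDK's actual `keyring_file_check_path` code:

```shell
# Reject key files whose mode is anything other than 0600,
# approximating the permission gate seen in the log.
check_key_perms() {
    local mode
    mode=$(stat -c %a "$1")
    if [ "$mode" != 600 ]; then
        echo "Invalid permissions for key file '$1': 0100$mode" >&2
        return 1
    fi
}

f=$(mktemp)
chmod 0660 "$f"
check_key_perms "$f" || echo "rejected 0660"   # prints "rejected 0660"
chmod 0600 "$f"
check_key_perms "$f" && echo "accepted 0600"
rm -f "$f"
```

Tightening the mode back to 0600 is exactly what the trace does next (`chmod 0600` followed by a successful `keyring_file_add_key`).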
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:45.984 16:44:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:45.984 16:44:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:45.984 16:44:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:45.984 16:44:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:45.984 16:44:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:45.984 16:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:46.241 [2024-07-15 16:44:25.713662] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bIU8BDLLA6': No such file or directory 00:27:46.242 [2024-07-15 16:44:25.713699] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:46.242 [2024-07-15 16:44:25.713731] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:46.242 [2024-07-15 16:44:25.713743] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:46.242 [2024-07-15 16:44:25.713756] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:46.242 request: 00:27:46.242 { 00:27:46.242 "name": "nvme0", 00:27:46.242 "trtype": "tcp", 00:27:46.242 "traddr": "127.0.0.1", 00:27:46.242 "adrfam": "ipv4", 00:27:46.242 "trsvcid": "4420", 00:27:46.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:46.242 
"prchk_reftag": false, 00:27:46.242 "prchk_guard": false, 00:27:46.242 "hdgst": false, 00:27:46.242 "ddgst": false, 00:27:46.242 "psk": "key0", 00:27:46.242 "method": "bdev_nvme_attach_controller", 00:27:46.242 "req_id": 1 00:27:46.242 } 00:27:46.242 Got JSON-RPC error response 00:27:46.242 response: 00:27:46.242 { 00:27:46.242 "code": -19, 00:27:46.242 "message": "No such device" 00:27:46.242 } 00:27:46.242 16:44:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:46.242 16:44:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:46.242 16:44:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:46.242 16:44:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:46.242 16:44:25 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:46.242 16:44:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:46.500 16:44:25 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:46.500 16:44:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:46.500 16:44:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:46.500 16:44:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:46.500 16:44:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:46.500 16:44:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:46.500 16:44:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HIJQzgaISI 00:27:46.500 16:44:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:46.500 16:44:25 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:46.500 16:44:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:46.500 16:44:25 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:46.500 16:44:25 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:46.500 16:44:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:46.500 16:44:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:46.500 16:44:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HIJQzgaISI 00:27:46.500 16:44:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HIJQzgaISI 00:27:46.500 16:44:26 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.HIJQzgaISI 00:27:46.500 16:44:26 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HIJQzgaISI 00:27:46.500 16:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HIJQzgaISI 00:27:46.757 16:44:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:46.758 16:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:47.015 nvme0n1 00:27:47.015 16:44:26 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:47.015 16:44:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:47.015 16:44:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:47.015 16:44:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.016 16:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.016 16:44:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:27:47.273 16:44:26 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:47.273 16:44:26 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:47.273 16:44:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:47.531 16:44:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:47.531 16:44:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:47.531 16:44:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.531 16:44:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.531 16:44:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:47.788 16:44:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:47.788 16:44:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:47.788 16:44:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:47.788 16:44:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:47.788 16:44:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.788 16:44:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.788 16:44:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:48.045 16:44:27 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:48.045 16:44:27 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:48.045 16:44:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:48.302 16:44:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:27:48.302 16:44:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:48.302 16:44:27 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:48.559 16:44:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:48.559 16:44:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HIJQzgaISI 00:27:48.559 16:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HIJQzgaISI 00:27:48.816 16:44:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.amfaFbnpFg 00:27:48.816 16:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.amfaFbnpFg 00:27:49.074 16:44:28 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:49.074 16:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:49.331 nvme0n1 00:27:49.331 16:44:28 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:49.331 16:44:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:49.897 16:44:29 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:49.897 "subsystems": [ 00:27:49.897 { 00:27:49.897 "subsystem": "keyring", 00:27:49.897 "config": [ 00:27:49.897 { 00:27:49.897 "method": "keyring_file_add_key", 00:27:49.897 
"params": { 00:27:49.897 "name": "key0", 00:27:49.897 "path": "/tmp/tmp.HIJQzgaISI" 00:27:49.897 } 00:27:49.897 }, 00:27:49.897 { 00:27:49.897 "method": "keyring_file_add_key", 00:27:49.897 "params": { 00:27:49.897 "name": "key1", 00:27:49.897 "path": "/tmp/tmp.amfaFbnpFg" 00:27:49.897 } 00:27:49.897 } 00:27:49.897 ] 00:27:49.897 }, 00:27:49.897 { 00:27:49.897 "subsystem": "iobuf", 00:27:49.897 "config": [ 00:27:49.897 { 00:27:49.897 "method": "iobuf_set_options", 00:27:49.897 "params": { 00:27:49.897 "small_pool_count": 8192, 00:27:49.897 "large_pool_count": 1024, 00:27:49.897 "small_bufsize": 8192, 00:27:49.897 "large_bufsize": 135168 00:27:49.897 } 00:27:49.897 } 00:27:49.897 ] 00:27:49.897 }, 00:27:49.897 { 00:27:49.897 "subsystem": "sock", 00:27:49.897 "config": [ 00:27:49.897 { 00:27:49.897 "method": "sock_set_default_impl", 00:27:49.897 "params": { 00:27:49.897 "impl_name": "posix" 00:27:49.897 } 00:27:49.897 }, 00:27:49.897 { 00:27:49.897 "method": "sock_impl_set_options", 00:27:49.897 "params": { 00:27:49.897 "impl_name": "ssl", 00:27:49.897 "recv_buf_size": 4096, 00:27:49.897 "send_buf_size": 4096, 00:27:49.897 "enable_recv_pipe": true, 00:27:49.897 "enable_quickack": false, 00:27:49.897 "enable_placement_id": 0, 00:27:49.897 "enable_zerocopy_send_server": true, 00:27:49.897 "enable_zerocopy_send_client": false, 00:27:49.897 "zerocopy_threshold": 0, 00:27:49.897 "tls_version": 0, 00:27:49.897 "enable_ktls": false 00:27:49.897 } 00:27:49.897 }, 00:27:49.897 { 00:27:49.897 "method": "sock_impl_set_options", 00:27:49.897 "params": { 00:27:49.897 "impl_name": "posix", 00:27:49.897 "recv_buf_size": 2097152, 00:27:49.897 "send_buf_size": 2097152, 00:27:49.897 "enable_recv_pipe": true, 00:27:49.897 "enable_quickack": false, 00:27:49.897 "enable_placement_id": 0, 00:27:49.897 "enable_zerocopy_send_server": true, 00:27:49.897 "enable_zerocopy_send_client": false, 00:27:49.897 "zerocopy_threshold": 0, 00:27:49.897 "tls_version": 0, 00:27:49.897 "enable_ktls": false 
00:27:49.898 } 00:27:49.898 } 00:27:49.898 ] 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "subsystem": "vmd", 00:27:49.898 "config": [] 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "subsystem": "accel", 00:27:49.898 "config": [ 00:27:49.898 { 00:27:49.898 "method": "accel_set_options", 00:27:49.898 "params": { 00:27:49.898 "small_cache_size": 128, 00:27:49.898 "large_cache_size": 16, 00:27:49.898 "task_count": 2048, 00:27:49.898 "sequence_count": 2048, 00:27:49.898 "buf_count": 2048 00:27:49.898 } 00:27:49.898 } 00:27:49.898 ] 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "subsystem": "bdev", 00:27:49.898 "config": [ 00:27:49.898 { 00:27:49.898 "method": "bdev_set_options", 00:27:49.898 "params": { 00:27:49.898 "bdev_io_pool_size": 65535, 00:27:49.898 "bdev_io_cache_size": 256, 00:27:49.898 "bdev_auto_examine": true, 00:27:49.898 "iobuf_small_cache_size": 128, 00:27:49.898 "iobuf_large_cache_size": 16 00:27:49.898 } 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "method": "bdev_raid_set_options", 00:27:49.898 "params": { 00:27:49.898 "process_window_size_kb": 1024 00:27:49.898 } 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "method": "bdev_iscsi_set_options", 00:27:49.898 "params": { 00:27:49.898 "timeout_sec": 30 00:27:49.898 } 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "method": "bdev_nvme_set_options", 00:27:49.898 "params": { 00:27:49.898 "action_on_timeout": "none", 00:27:49.898 "timeout_us": 0, 00:27:49.898 "timeout_admin_us": 0, 00:27:49.898 "keep_alive_timeout_ms": 10000, 00:27:49.898 "arbitration_burst": 0, 00:27:49.898 "low_priority_weight": 0, 00:27:49.898 "medium_priority_weight": 0, 00:27:49.898 "high_priority_weight": 0, 00:27:49.898 "nvme_adminq_poll_period_us": 10000, 00:27:49.898 "nvme_ioq_poll_period_us": 0, 00:27:49.898 "io_queue_requests": 512, 00:27:49.898 "delay_cmd_submit": true, 00:27:49.898 "transport_retry_count": 4, 00:27:49.898 "bdev_retry_count": 3, 00:27:49.898 "transport_ack_timeout": 0, 00:27:49.898 "ctrlr_loss_timeout_sec": 0, 00:27:49.898 
"reconnect_delay_sec": 0, 00:27:49.898 "fast_io_fail_timeout_sec": 0, 00:27:49.898 "disable_auto_failback": false, 00:27:49.898 "generate_uuids": false, 00:27:49.898 "transport_tos": 0, 00:27:49.898 "nvme_error_stat": false, 00:27:49.898 "rdma_srq_size": 0, 00:27:49.898 "io_path_stat": false, 00:27:49.898 "allow_accel_sequence": false, 00:27:49.898 "rdma_max_cq_size": 0, 00:27:49.898 "rdma_cm_event_timeout_ms": 0, 00:27:49.898 "dhchap_digests": [ 00:27:49.898 "sha256", 00:27:49.898 "sha384", 00:27:49.898 "sha512" 00:27:49.898 ], 00:27:49.898 "dhchap_dhgroups": [ 00:27:49.898 "null", 00:27:49.898 "ffdhe2048", 00:27:49.898 "ffdhe3072", 00:27:49.898 "ffdhe4096", 00:27:49.898 "ffdhe6144", 00:27:49.898 "ffdhe8192" 00:27:49.898 ] 00:27:49.898 } 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "method": "bdev_nvme_attach_controller", 00:27:49.898 "params": { 00:27:49.898 "name": "nvme0", 00:27:49.898 "trtype": "TCP", 00:27:49.898 "adrfam": "IPv4", 00:27:49.898 "traddr": "127.0.0.1", 00:27:49.898 "trsvcid": "4420", 00:27:49.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.898 "prchk_reftag": false, 00:27:49.898 "prchk_guard": false, 00:27:49.898 "ctrlr_loss_timeout_sec": 0, 00:27:49.898 "reconnect_delay_sec": 0, 00:27:49.898 "fast_io_fail_timeout_sec": 0, 00:27:49.898 "psk": "key0", 00:27:49.898 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:49.898 "hdgst": false, 00:27:49.898 "ddgst": false 00:27:49.898 } 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "method": "bdev_nvme_set_hotplug", 00:27:49.898 "params": { 00:27:49.898 "period_us": 100000, 00:27:49.898 "enable": false 00:27:49.898 } 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "method": "bdev_wait_for_examine" 00:27:49.898 } 00:27:49.898 ] 00:27:49.898 }, 00:27:49.898 { 00:27:49.898 "subsystem": "nbd", 00:27:49.898 "config": [] 00:27:49.898 } 00:27:49.898 ] 00:27:49.898 }' 00:27:49.898 16:44:29 keyring_file -- keyring/file.sh@114 -- # killprocess 1648337 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 1648337 ']' 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1648337 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1648337 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1648337' 00:27:49.898 killing process with pid 1648337 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@967 -- # kill 1648337 00:27:49.898 Received shutdown signal, test time was about 1.000000 seconds 00:27:49.898 00:27:49.898 Latency(us) 00:27:49.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.898 =================================================================================================================== 00:27:49.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:49.898 16:44:29 keyring_file -- common/autotest_common.sh@972 -- # wait 1648337 00:27:50.157 16:44:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=1649682 00:27:50.157 16:44:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1649682 /var/tmp/bperf.sock 00:27:50.157 16:44:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1649682 ']' 00:27:50.157 16:44:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:50.157 16:44:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:50.157 16:44:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 
00:27:50.157 16:44:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:50.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:50.157 16:44:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:50.157 "subsystems": [ 00:27:50.157 { 00:27:50.157 "subsystem": "keyring", 00:27:50.157 "config": [ 00:27:50.158 { 00:27:50.158 "method": "keyring_file_add_key", 00:27:50.158 "params": { 00:27:50.158 "name": "key0", 00:27:50.158 "path": "/tmp/tmp.HIJQzgaISI" 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "method": "keyring_file_add_key", 00:27:50.158 "params": { 00:27:50.158 "name": "key1", 00:27:50.158 "path": "/tmp/tmp.amfaFbnpFg" 00:27:50.158 } 00:27:50.158 } 00:27:50.158 ] 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "subsystem": "iobuf", 00:27:50.158 "config": [ 00:27:50.158 { 00:27:50.158 "method": "iobuf_set_options", 00:27:50.158 "params": { 00:27:50.158 "small_pool_count": 8192, 00:27:50.158 "large_pool_count": 1024, 00:27:50.158 "small_bufsize": 8192, 00:27:50.158 "large_bufsize": 135168 00:27:50.158 } 00:27:50.158 } 00:27:50.158 ] 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "subsystem": "sock", 00:27:50.158 "config": [ 00:27:50.158 { 00:27:50.158 "method": "sock_set_default_impl", 00:27:50.158 "params": { 00:27:50.158 "impl_name": "posix" 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "method": "sock_impl_set_options", 00:27:50.158 "params": { 00:27:50.158 "impl_name": "ssl", 00:27:50.158 "recv_buf_size": 4096, 00:27:50.158 "send_buf_size": 4096, 00:27:50.158 "enable_recv_pipe": true, 00:27:50.158 "enable_quickack": false, 00:27:50.158 "enable_placement_id": 0, 00:27:50.158 "enable_zerocopy_send_server": true, 00:27:50.158 "enable_zerocopy_send_client": false, 00:27:50.158 "zerocopy_threshold": 0, 00:27:50.158 "tls_version": 0, 00:27:50.158 "enable_ktls": false 00:27:50.158 } 00:27:50.158 }, 
00:27:50.158 { 00:27:50.158 "method": "sock_impl_set_options", 00:27:50.158 "params": { 00:27:50.158 "impl_name": "posix", 00:27:50.158 "recv_buf_size": 2097152, 00:27:50.158 "send_buf_size": 2097152, 00:27:50.158 "enable_recv_pipe": true, 00:27:50.158 "enable_quickack": false, 00:27:50.158 "enable_placement_id": 0, 00:27:50.158 "enable_zerocopy_send_server": true, 00:27:50.158 "enable_zerocopy_send_client": false, 00:27:50.158 "zerocopy_threshold": 0, 00:27:50.158 "tls_version": 0, 00:27:50.158 "enable_ktls": false 00:27:50.158 } 00:27:50.158 } 00:27:50.158 ] 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "subsystem": "vmd", 00:27:50.158 "config": [] 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "subsystem": "accel", 00:27:50.158 "config": [ 00:27:50.158 { 00:27:50.158 "method": "accel_set_options", 00:27:50.158 "params": { 00:27:50.158 "small_cache_size": 128, 00:27:50.158 "large_cache_size": 16, 00:27:50.158 "task_count": 2048, 00:27:50.158 "sequence_count": 2048, 00:27:50.158 "buf_count": 2048 00:27:50.158 } 00:27:50.158 } 00:27:50.158 ] 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "subsystem": "bdev", 00:27:50.158 "config": [ 00:27:50.158 { 00:27:50.158 "method": "bdev_set_options", 00:27:50.158 "params": { 00:27:50.158 "bdev_io_pool_size": 65535, 00:27:50.158 "bdev_io_cache_size": 256, 00:27:50.158 "bdev_auto_examine": true, 00:27:50.158 "iobuf_small_cache_size": 128, 00:27:50.158 "iobuf_large_cache_size": 16 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "method": "bdev_raid_set_options", 00:27:50.158 "params": { 00:27:50.158 "process_window_size_kb": 1024 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "method": "bdev_iscsi_set_options", 00:27:50.158 "params": { 00:27:50.158 "timeout_sec": 30 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "method": "bdev_nvme_set_options", 00:27:50.158 "params": { 00:27:50.158 "action_on_timeout": "none", 00:27:50.158 "timeout_us": 0, 00:27:50.158 "timeout_admin_us": 0, 00:27:50.158 
"keep_alive_timeout_ms": 10000, 00:27:50.158 "arbitration_burst": 0, 00:27:50.158 "low_priority_weight": 0, 00:27:50.158 "medium_priority_weight": 0, 00:27:50.158 "high_priority_weight": 0, 00:27:50.158 "nvme_adminq_poll_period_us": 10000, 00:27:50.158 "nvme_ioq_poll_period_us": 0, 00:27:50.158 "io_queue_requests": 512, 00:27:50.158 "delay_cmd_submit": true, 00:27:50.158 "transport_retry_count": 4, 00:27:50.158 "bdev_retry_count": 3, 00:27:50.158 "transport_ack_timeout": 0, 00:27:50.158 "ctrlr_loss_timeout_sec": 0, 00:27:50.158 "reconnect_delay_sec": 0, 00:27:50.158 "fast_io_fail_timeout_sec": 0, 00:27:50.158 "disable_auto_failback": false, 00:27:50.158 "generate_uuids": false, 00:27:50.158 "transport_tos": 0, 00:27:50.158 "nvme_error_stat": false, 00:27:50.158 "rdma_srq_size": 0, 00:27:50.158 "io_path_stat": false, 00:27:50.158 "allow_accel_sequence": false, 00:27:50.158 "rdma_max_cq_size": 0, 00:27:50.158 "rdma_cm_event_timeout_ms": 0, 00:27:50.158 "dhchap_digests": [ 00:27:50.158 "sha256", 00:27:50.158 "sha384", 00:27:50.158 "sha512" 00:27:50.158 ], 00:27:50.158 "dhchap_dhgroups": [ 00:27:50.158 "null", 00:27:50.158 "ffdhe2048", 00:27:50.158 "ffdhe3072", 00:27:50.158 "ffdhe4096", 00:27:50.158 "ffdhe6144", 00:27:50.158 "ffdhe8192" 00:27:50.158 ] 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "method": "bdev_nvme_attach_controller", 00:27:50.158 "params": { 00:27:50.158 "name": "nvme0", 00:27:50.158 "trtype": "TCP", 00:27:50.158 "adrfam": "IPv4", 00:27:50.158 "traddr": "127.0.0.1", 00:27:50.158 "trsvcid": "4420", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.158 "prchk_reftag": false, 00:27:50.158 "prchk_guard": false, 00:27:50.158 "ctrlr_loss_timeout_sec": 0, 00:27:50.158 "reconnect_delay_sec": 0, 00:27:50.158 "fast_io_fail_timeout_sec": 0, 00:27:50.158 "psk": "key0", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.158 "hdgst": false, 00:27:50.158 "ddgst": false 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 
"method": "bdev_nvme_set_hotplug", 00:27:50.158 "params": { 00:27:50.158 "period_us": 100000, 00:27:50.158 "enable": false 00:27:50.158 } 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "method": "bdev_wait_for_examine" 00:27:50.158 } 00:27:50.158 ] 00:27:50.158 }, 00:27:50.158 { 00:27:50.158 "subsystem": "nbd", 00:27:50.158 "config": [] 00:27:50.158 } 00:27:50.158 ] 00:27:50.158 }' 00:27:50.158 16:44:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:50.158 16:44:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:50.158 [2024-07-15 16:44:29.560782] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:27:50.158 [2024-07-15 16:44:29.560860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649682 ] 00:27:50.158 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.158 [2024-07-15 16:44:29.621291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.158 [2024-07-15 16:44:29.734738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.416 [2024-07-15 16:44:29.923099] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:50.981 16:44:30 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:50.981 16:44:30 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:50.981 16:44:30 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:50.981 16:44:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.981 16:44:30 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:51.239 16:44:30 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:51.239 16:44:30 keyring_file -- 
keyring/file.sh@121 -- # get_refcnt key0 00:27:51.239 16:44:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:51.239 16:44:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:51.239 16:44:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:51.239 16:44:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.239 16:44:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:51.496 16:44:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:51.496 16:44:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:51.496 16:44:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:51.496 16:44:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:51.496 16:44:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:51.496 16:44:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.496 16:44:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:51.754 16:44:31 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:51.754 16:44:31 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:51.754 16:44:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:51.754 16:44:31 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:52.011 16:44:31 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:52.011 16:44:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:52.011 16:44:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HIJQzgaISI /tmp/tmp.amfaFbnpFg 00:27:52.011 16:44:31 keyring_file -- keyring/file.sh@20 -- # killprocess 1649682 
00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1649682 ']' 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1649682 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1649682 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1649682' 00:27:52.011 killing process with pid 1649682 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@967 -- # kill 1649682 00:27:52.011 Received shutdown signal, test time was about 1.000000 seconds 00:27:52.011 00:27:52.011 Latency(us) 00:27:52.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.011 =================================================================================================================== 00:27:52.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:52.011 16:44:31 keyring_file -- common/autotest_common.sh@972 -- # wait 1649682 00:27:52.269 16:44:31 keyring_file -- keyring/file.sh@21 -- # killprocess 1648200 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1648200 ']' 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1648200 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1648200 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@954 
-- # process_name=reactor_0 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1648200' 00:27:52.269 killing process with pid 1648200 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@967 -- # kill 1648200 00:27:52.269 [2024-07-15 16:44:31.843487] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:52.269 16:44:31 keyring_file -- common/autotest_common.sh@972 -- # wait 1648200 00:27:52.834 00:27:52.834 real 0m14.838s 00:27:52.834 user 0m35.792s 00:27:52.834 sys 0m3.435s 00:27:52.834 16:44:32 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.834 16:44:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:52.834 ************************************ 00:27:52.834 END TEST keyring_file 00:27:52.834 ************************************ 00:27:52.834 16:44:32 -- common/autotest_common.sh@1142 -- # return 0 00:27:52.834 16:44:32 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:52.834 16:44:32 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:52.834 16:44:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:52.834 16:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.834 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:27:52.834 ************************************ 00:27:52.834 START TEST keyring_linux 00:27:52.834 ************************************ 00:27:52.834 16:44:32 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:52.834 * Looking for test storage... 
00:27:52.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:52.834 16:44:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:52.834 16:44:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.834 16:44:32 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.834 16:44:32 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.834 16:44:32 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.834 16:44:32 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.834 16:44:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.834 16:44:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.834 16:44:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.834 16:44:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:52.834 16:44:32 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.834 16:44:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:52.834 16:44:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:52.834 16:44:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:52.834 16:44:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:52.834 16:44:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:52.834 16:44:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:52.834 16:44:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:52.834 16:44:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:52.834 16:44:32 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:27:52.834 16:44:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:52.834 16:44:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:52.834 16:44:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:52.834 16:44:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:52.834 16:44:32 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:53.092 /tmp/:spdk-test:key0 00:27:53.092 16:44:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:53.092 16:44:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:27:53.092 16:44:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:53.092 16:44:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:53.092 16:44:32 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:53.092 16:44:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:53.092 16:44:32 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:53.092 16:44:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:53.092 /tmp/:spdk-test:key1 00:27:53.092 16:44:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1650167 00:27:53.092 16:44:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:53.092 16:44:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1650167 00:27:53.092 16:44:32 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1650167 ']' 00:27:53.092 16:44:32 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.092 16:44:32 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.092 16:44:32 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.092 16:44:32 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.092 16:44:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:53.092 [2024-07-15 16:44:32.564282] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:27:53.092 [2024-07-15 16:44:32.564369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650167 ] 00:27:53.092 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.092 [2024-07-15 16:44:32.622841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.350 [2024-07-15 16:44:32.732218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.913 16:44:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.913 16:44:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:53.913 16:44:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:53.913 16:44:33 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.913 16:44:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:53.913 [2024-07-15 16:44:33.506654] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.170 null0 00:27:54.170 [2024-07-15 16:44:33.538690] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:54.170 [2024-07-15 16:44:33.539150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:54.170 16:44:33 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.170 16:44:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:54.170 477285420 00:27:54.170 16:44:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:54.170 518541455 00:27:54.170 16:44:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1650302 00:27:54.170 16:44:33 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:54.170 16:44:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1650302 /var/tmp/bperf.sock 00:27:54.170 16:44:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1650302 ']' 00:27:54.170 16:44:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:54.170 16:44:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.170 16:44:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:54.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:54.170 16:44:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.170 16:44:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:54.170 [2024-07-15 16:44:33.607817] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:27:54.170 [2024-07-15 16:44:33.607898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650302 ] 00:27:54.170 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.170 [2024-07-15 16:44:33.673424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.427 [2024-07-15 16:44:33.791424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.427 16:44:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.427 16:44:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:54.427 16:44:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:54.427 16:44:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:54.684 16:44:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:54.684 16:44:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:54.941 16:44:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:54.941 16:44:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:55.197 [2024-07-15 16:44:34.626236] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:55.197 
nvme0n1 00:27:55.197 16:44:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:55.197 16:44:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:55.197 16:44:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:55.197 16:44:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:55.197 16:44:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:55.197 16:44:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:55.454 16:44:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:55.454 16:44:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:55.454 16:44:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:55.454 16:44:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:55.454 16:44:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:55.454 16:44:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:55.454 16:44:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:55.711 16:44:35 keyring_linux -- keyring/linux.sh@25 -- # sn=477285420 00:27:55.711 16:44:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:55.711 16:44:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:55.711 16:44:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 477285420 == \4\7\7\2\8\5\4\2\0 ]] 00:27:55.711 16:44:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 477285420 00:27:55.711 16:44:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:27:55.711 16:44:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:55.969 Running I/O for 1 seconds...
00:27:56.901
00:27:56.901 Latency(us)
00:27:56.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.901 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:56.901 nvme0n1 : 1.02 4016.70 15.69 0.00 0.00 31555.06 6699.24 40001.23
00:27:56.901 ===================================================================================================================
00:27:56.901 Total : 4016.70 15.69 0.00 0.00 31555.06 6699.24 40001.23
00:27:56.901 0
00:27:56.901 16:44:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:27:56.901 16:44:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:27:57.158 16:44:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:27:57.158 16:44:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:27:57.158 16:44:36 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:27:57.158 16:44:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:27:57.158 16:44:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:57.158 16:44:36 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:27:57.418 16:44:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:27:57.418 16:44:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:27:57.418 16:44:36 keyring_linux -- keyring/linux.sh@23 -- # return
00:27:57.418 16:44:36 keyring_linux --
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:57.418 16:44:36 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:57.418 16:44:36 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:57.418 16:44:36 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:57.418 16:44:36 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.418 16:44:36 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:57.418 16:44:36 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.418 16:44:36 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:57.418 16:44:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:57.721 [2024-07-15 16:44:37.120293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:57.721 [2024-07-15 16:44:37.120768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e43f0 (107): Transport endpoint is not connected 00:27:57.721 [2024-07-15 16:44:37.121758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x19e43f0 (9): Bad file descriptor
00:27:57.721 [2024-07-15 16:44:37.122758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:27:57.721 [2024-07-15 16:44:37.122776] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:27:57.721 [2024-07-15 16:44:37.122803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:27:57.721 request:
00:27:57.721 {
00:27:57.721 "name": "nvme0",
00:27:57.721 "trtype": "tcp",
00:27:57.721 "traddr": "127.0.0.1",
00:27:57.721 "adrfam": "ipv4",
00:27:57.721 "trsvcid": "4420",
00:27:57.721 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:57.721 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:57.721 "prchk_reftag": false,
00:27:57.721 "prchk_guard": false,
00:27:57.721 "hdgst": false,
00:27:57.721 "ddgst": false,
00:27:57.721 "psk": ":spdk-test:key1",
00:27:57.721 "method": "bdev_nvme_attach_controller",
00:27:57.721 "req_id": 1
00:27:57.721 }
00:27:57.721 Got JSON-RPC error response
00:27:57.721 response:
00:27:57.721 {
00:27:57.721 "code": -5,
00:27:57.721 "message": "Input/output error"
00:27:57.721 }
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl
search @s user :spdk-test:key0
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@33 -- # sn=477285420
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 477285420
00:27:57.721 1 links removed
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@33 -- # sn=518541455
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 518541455
00:27:57.721 1 links removed
00:27:57.721 16:44:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1650302
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1650302 ']'
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1650302
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1650302
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1650302'
00:27:57.721 killing process with pid 1650302
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 1650302
00:27:57.721 Received shutdown signal, test time was about 1.000000 seconds
00:27:57.721
00:27:57.721 Latency(us)
00:27:57.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:57.721 ===================================================================================================================
00:27:57.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:57.721 16:44:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 1650302
00:27:57.979 16:44:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1650167
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1650167 ']'
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1650167
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1650167
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1650167'
00:27:57.979 killing process with pid 1650167
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 1650167
00:27:57.979 16:44:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 1650167
00:27:58.547
00:27:58.547 real 0m5.600s
00:27:58.547 user 0m10.158s
00:27:58.547 sys 0m1.557s
00:27:58.547 16:44:37 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:58.547 16:44:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:27:58.547 ************************************
00:27:58.547 END TEST keyring_linux
00:27:58.547 ************************************
00:27:58.547 16:44:37 -- common/autotest_common.sh@1142 -- # return 0
00:27:58.547 16:44:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:27:58.547 16:44:37 -- spdk/autotest.sh@312 -- # '[' 0 -eq
1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:58.547 16:44:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:58.547 16:44:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:58.547 16:44:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:58.547 16:44:37 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:58.547 16:44:37 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:58.547 16:44:37 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:58.547 16:44:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.547 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:27:58.547 16:44:37 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:58.547 16:44:37 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:58.547 16:44:37 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:58.547 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:28:00.450 INFO: APP EXITING 00:28:00.450 INFO: killing all VMs 00:28:00.450 INFO: killing vhost app 00:28:00.450 INFO: EXIT DONE 00:28:01.384 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:28:01.384 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:01.384 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:01.384 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:01.384 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:01.384 0000:00:04.3 (8086 0e23): Already using 
the ioatdma driver 00:28:01.384 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:01.384 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:01.384 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:01.384 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:01.384 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:01.384 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:01.384 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:01.384 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:01.384 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:01.384 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:01.384 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:02.758 Cleaning 00:28:02.758 Removing: /var/run/dpdk/spdk0/config 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:02.758 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:02.758 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:02.758 Removing: /var/run/dpdk/spdk1/config 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:02.758 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:02.758 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:02.758 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:02.758 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:02.758 Removing: /var/run/dpdk/spdk2/config 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:02.758 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:02.758 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:02.758 Removing: /var/run/dpdk/spdk3/config 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:02.758 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:02.758 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:02.758 Removing: /var/run/dpdk/spdk4/config 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:02.758 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:02.758 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:02.758 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:02.758 Removing: /dev/shm/bdev_svc_trace.1 00:28:02.758 Removing: /dev/shm/nvmf_trace.0 00:28:02.758 Removing: /dev/shm/spdk_tgt_trace.pid1388671 00:28:02.758 Removing: /var/run/dpdk/spdk0 00:28:02.758 Removing: /var/run/dpdk/spdk1 00:28:02.758 Removing: /var/run/dpdk/spdk2 00:28:02.758 Removing: /var/run/dpdk/spdk3 00:28:02.758 Removing: /var/run/dpdk/spdk4 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1386997 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1387739 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1388671 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1389108 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1389802 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1389944 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1390662 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1390667 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1390909 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1392151 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1393151 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1393467 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1393778 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1393982 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1394218 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1394450 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1394723 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1394916 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1395134 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1398088 00:28:02.758 Removing: /var/run/dpdk/spdk_pid1398251 00:28:02.758 
Removing: /var/run/dpdk/spdk_pid1398413
00:28:02.758 Removing: /var/run/dpdk/spdk_pid1398424
00:28:02.758 Removing: /var/run/dpdk/spdk_pid1398847
00:28:02.758 Removing: /var/run/dpdk/spdk_pid1398983
00:28:02.758 Removing: /var/run/dpdk/spdk_pid1399289
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1399423
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1399587
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1399715
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1399888
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1399894
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1400302
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1400538
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1400738
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1400910
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1401062
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1401127
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1401403
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1401562
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1401718
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1401997
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1402152
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1402330
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1402585
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1402744
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1402957
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1403180
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1403332
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1403585
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1403763
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1403927
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1404193
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1404358
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1404517
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1404794
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1404954
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1405115
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1405306
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1405510
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1407682
00:28:03.016 Removing: /var/run/dpdk/spdk_pid1434735
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1437359
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1444210
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1447522
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1450002
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1450520
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1454504
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1458473
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1458477
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1459017
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1459671
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1460336
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1460735
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1460738
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1460884
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1461016
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1461019
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1461676
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1462445
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1463103
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1463948
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1464012
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1464163
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1465047
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1465763
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1471243
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1471520
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1474054
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1477857
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1479915
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1486442
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1491639
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1492953
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1493614
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1504327
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1506537
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1532060
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1534848
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1536033
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1537347
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1537386
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1537507
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1537645
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1538086
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1539402
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1540251
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1540562
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1542192
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1542738
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1543190
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1545702
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1551733
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1555127
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1558889
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1559834
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1560936
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1563464
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1565698
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1569902
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1569906
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1572803
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1572943
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1573195
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1573464
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1573470
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1576231
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1576561
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1579218
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1581200
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1584617
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1588060
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1594905
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1599378
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1599382
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1611592
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1612002
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1612470
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1612936
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1613527
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1614027
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1614463
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1614871
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1617376
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1617641
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1621429
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1621490
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1623215
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1628747
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1628759
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1631647
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1633041
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1634445
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1635190
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1636717
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1637596
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1642850
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1643130
00:28:03.017 Removing: /var/run/dpdk/spdk_pid1643521
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1645079
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1645360
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1645758
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1648200
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1648337
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1649682
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1650167
00:28:03.275 Removing: /var/run/dpdk/spdk_pid1650302
00:28:03.275 Clean
00:28:03.275 16:44:42 -- common/autotest_common.sh@1451 -- # return 0
00:28:03.275 16:44:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:28:03.275 16:44:42 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:03.275 16:44:42 -- common/autotest_common.sh@10 -- # set +x
00:28:03.275 16:44:42 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:28:03.275 16:44:42 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:03.275 16:44:42 -- common/autotest_common.sh@10 -- # set +x
00:28:03.275 16:44:42 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:03.275 16:44:42 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:28:03.275 16:44:42 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:28:03.275 16:44:42 -- spdk/autotest.sh@391 -- # hash lcov
00:28:03.275 16:44:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:28:03.275 16:44:42 -- spdk/autotest.sh@393 -- # hostname
00:28:03.275 16:44:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:28:03.532 geninfo: WARNING: invalid characters removed from testname!
00:28:35.587 16:45:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:36.521 16:45:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:39.796 16:45:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:43.102 16:45:22 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:47.280 16:45:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:50.560 16:45:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:54.739 16:45:33 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:54.739 16:45:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:54.739 16:45:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:54.739 16:45:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:54.739 16:45:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:54.739 16:45:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:54.739 16:45:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:54.739 16:45:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:54.739 16:45:33 -- paths/export.sh@5 -- $ export PATH
00:28:54.739 16:45:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:54.739 16:45:33 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:28:54.739 16:45:33 -- common/autobuild_common.sh@444 -- $ date +%s
00:28:54.739 16:45:33 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721054733.XXXXXX
00:28:54.739 16:45:33 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721054733.pWWoXD
00:28:54.739 16:45:33 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:28:54.739 16:45:33 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:28:54.739 16:45:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:28:54.739 16:45:33 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:54.739 16:45:33 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:54.739 16:45:33 -- common/autobuild_common.sh@460 -- $ get_config_params
00:28:54.739 16:45:33 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:28:54.739 16:45:33 -- common/autotest_common.sh@10 -- $ set +x
00:28:54.739 16:45:33 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:28:54.739 16:45:33 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:28:54.739 16:45:33 -- pm/common@17 -- $ local monitor
00:28:54.739 16:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:54.739 16:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:54.739 16:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:54.739 16:45:33 -- pm/common@21 -- $ date +%s
00:28:54.739 16:45:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:54.739 16:45:33 -- pm/common@21 -- $ date +%s
00:28:54.739 16:45:33 -- pm/common@25 -- $ sleep 1
00:28:54.739 16:45:33 -- pm/common@21 -- $ date +%s
00:28:54.739 16:45:33 -- pm/common@21 -- $ date +%s
00:28:54.739 16:45:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054733
00:28:54.739 16:45:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054733
00:28:54.739 16:45:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054733
00:28:54.739 16:45:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054733
00:28:54.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054733_collect-vmstat.pm.log
00:28:54.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054733_collect-cpu-temp.pm.log
00:28:54.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054733_collect-cpu-load.pm.log
00:28:54.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054733_collect-bmc-pm.bmc.pm.log
00:28:54.998 16:45:34 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:28:54.998 16:45:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:28:54.998 16:45:34 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:54.998 16:45:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:54.998 16:45:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:54.998 16:45:34 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:54.998 16:45:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:54.998 16:45:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:54.998 16:45:34 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:55.257 16:45:34 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:55.257 16:45:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:55.257 16:45:34 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:55.257 16:45:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:55.257 16:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:55.257 16:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:55.257 16:45:34 -- pm/common@44 -- $ pid=1660639
00:28:55.257 16:45:34 -- pm/common@50 -- $ kill -TERM 1660639
00:28:55.257 16:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:55.257 16:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:55.257 16:45:34 -- pm/common@44 -- $ pid=1660641
00:28:55.257 16:45:34 -- pm/common@50 -- $ kill -TERM 1660641
00:28:55.257 16:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:55.257 16:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:55.257 16:45:34 -- pm/common@44 -- $ pid=1660643
00:28:55.257 16:45:34 -- pm/common@50 -- $ kill -TERM 1660643
00:28:55.257 16:45:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:55.257 16:45:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:55.257 16:45:34 -- pm/common@44 -- $ pid=1660674
00:28:55.257 16:45:34 -- pm/common@50 -- $ sudo -E kill -TERM 1660674
00:28:55.257 + [[ -n 1303479 ]]
00:28:55.257 + sudo kill 1303479
00:28:55.267 [Pipeline] }
00:28:55.277 [Pipeline] // stage
00:28:55.281 [Pipeline] }
00:28:55.293 [Pipeline] // timeout
00:28:55.298 [Pipeline] }
00:28:55.312 [Pipeline] // catchError
00:28:55.316 [Pipeline] }
00:28:55.330 [Pipeline] // wrap
00:28:55.336 [Pipeline] }
00:28:55.348 [Pipeline] // catchError
00:28:55.355 [Pipeline] stage
00:28:55.357 [Pipeline] { (Epilogue)
00:28:55.369 [Pipeline] catchError
00:28:55.370 [Pipeline] {
00:28:55.382 [Pipeline] echo
00:28:55.383 Cleanup processes
00:28:55.388 [Pipeline] sh
00:28:55.670 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:55.670 1660792 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:55.670 1660905 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:55.684 [Pipeline] sh
00:28:55.989 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:55.989 ++ grep -v 'sudo pgrep'
00:28:55.989 ++ awk '{print $1}'
00:28:55.989 + sudo kill -9 1660792
00:28:56.001 [Pipeline] sh
00:28:56.281 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:06.293 [Pipeline] sh
00:29:06.581 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:06.581 Artifacts sizes are good
00:29:06.598 [Pipeline] archiveArtifacts
00:29:06.606 Archiving artifacts
00:29:06.825 [Pipeline] sh
00:29:07.110 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:29:07.128 [Pipeline] cleanWs
00:29:07.140 [WS-CLEANUP] Deleting project workspace...
00:29:07.140 [WS-CLEANUP] Deferred wipeout is used...
00:29:07.148 [WS-CLEANUP] done
00:29:07.150 [Pipeline] }
00:29:07.169 [Pipeline] // catchError
00:29:07.183 [Pipeline] sh
00:29:07.463 + logger -p user.info -t JENKINS-CI
00:29:07.471 [Pipeline] }
00:29:07.485 [Pipeline] // stage
00:29:07.490 [Pipeline] }
00:29:07.506 [Pipeline] // node
00:29:07.511 [Pipeline] End of Pipeline
00:29:07.543 Finished: SUCCESS